Test Report: Hyperkit_macOS 19302

686e9da65a2d4195f8e8610efbc417c3b07d1722:2024-07-19:35416

Failed tests (3/339)

Order  Failed test                                  Duration
 232   TestMountStart/serial/StartWithMountSecond     75.94s
 298   TestNoKubernetes/serial/StartWithStopK8s      185.57s
 301   TestNoKubernetes/serial/Start                 180.35s
TestMountStart/serial/StartWithMountSecond (75.94s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-576000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-2-576000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 90 (1m15.774997783s)

-- stdout --
	* [mount-start-2-576000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-2-576000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 14:55:33 mount-start-2-576000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 14:55:33 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:33.251587807Z" level=info msg="Starting up"
	Jul 19 14:55:33 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:33.252413371Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 14:55:33 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:33.253083635Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=527
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.268224350Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.283505435Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.283569425Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.283632804Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.283667833Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.283748428Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.283785182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.283927922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.283968924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.284002371Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.284030884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.284112221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.284319612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.285895748Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.285952395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.286086834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.286157725Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.286249822Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.286313827Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.288964042Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289048235Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289128094Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289173400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289216934Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289308469Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289517701Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289620205Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289657472Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289690216Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289721707Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289755713Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289785610Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289816695Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289853154Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289886147Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289914495Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289941956Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.289979095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290010092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290038978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290068750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290100566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290129096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290157744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290223480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290259231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290290628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290319053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290347037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290374965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290405455Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290440595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290470513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290499430Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290569898Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290615370Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290645052Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290673396Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290700573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290727791Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290755657Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.290925093Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.291034379Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.291119381Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 14:55:33 mount-start-2-576000 dockerd[527]: time="2024-07-19T14:55:33.291184603Z" level=info msg="containerd successfully booted in 0.023594s"
	Jul 19 14:55:34 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:34.275445599Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 14:55:34 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:34.281056164Z" level=info msg="Loading containers: start."
	Jul 19 14:55:34 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:34.382199426Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 14:55:34 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:34.466594067Z" level=info msg="Loading containers: done."
	Jul 19 14:55:34 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:34.473360638Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 14:55:34 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:34.473444152Z" level=info msg="Daemon has completed initialization"
	Jul 19 14:55:34 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:34.503959516Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 14:55:34 mount-start-2-576000 systemd[1]: Started Docker Application Container Engine.
	Jul 19 14:55:34 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:34.504189827Z" level=info msg="API listen on [::]:2376"
	Jul 19 14:55:35 mount-start-2-576000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 14:55:35 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:35.445404396Z" level=info msg="Processing signal 'terminated'"
	Jul 19 14:55:35 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:35.446505085Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 14:55:35 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:35.446596594Z" level=info msg="Daemon shutdown complete"
	Jul 19 14:55:35 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:35.446632912Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 14:55:35 mount-start-2-576000 dockerd[521]: time="2024-07-19T14:55:35.446666952Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 14:55:36 mount-start-2-576000 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 14:55:36 mount-start-2-576000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 14:55:36 mount-start-2-576000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 14:55:36 mount-start-2-576000 dockerd[865]: time="2024-07-19T14:55:36.486392644Z" level=info msg="Starting up"
	Jul 19 14:56:36 mount-start-2-576000 dockerd[865]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 14:56:36 mount-start-2-576000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 14:56:36 mount-start-2-576000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 14:56:36 mount-start-2-576000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-2-576000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-576000 -n mount-start-2-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-576000 -n mount-start-2-576000: exit status 6 (160.281819ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0719 07:56:36.662831    4132 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-576000" does not appear in /Users/jenkins/minikube-integration/19302-1032/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-576000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/StartWithMountSecond (75.94s)

TestNoKubernetes/serial/StartWithStopK8s (185.57s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-273000 --no-kubernetes --driver=hyperkit 
E0719 08:34:29.222509    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-273000 --no-kubernetes --driver=hyperkit : exit status 90 (1m4.460168408s)

-- stdout --
	* [NoKubernetes-273000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-273000
	* Updating the running hyperkit "NoKubernetes-273000" VM ...
	  - Kubernetes: Stopping ...
	  - Kubernetes: Stopped
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 15:33:54 NoKubernetes-273000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:54.592099296Z" level=info msg="Starting up"
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:54.592549983Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:54.593168654Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=539
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.612022462Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627146952Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627170480Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627230221Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627266458Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627326959Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627337103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627462320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627497461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627509537Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627516968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627602960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627767339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.629316724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.629354797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.629461187Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.629495175Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.629564138Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.629628997Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.648410342Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.648823134Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.648878407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.648937192Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.648997163Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.649145215Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650180764Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650359176Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650446669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650460990Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650470445Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650479513Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650488338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650497527Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650507156Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650515692Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650523638Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650532052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650586315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650600843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650609653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650618661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650626759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650635413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650643325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650651767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650660557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650670200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650684561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650759525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650771519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650782552Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650803486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650814119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650821726Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650896650Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650934739Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650947225Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650955616Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650963490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.651061483Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.651074151Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.651303867Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.651361845Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.651419310Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.651452641Z" level=info msg="containerd successfully booted in 0.040120s"
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.653716784Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.659933008Z" level=info msg="Loading containers: start."
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.750241885Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.842744680Z" level=info msg="Loading containers: done."
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.853833822Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.853921213Z" level=info msg="Daemon has completed initialization"
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.877908561Z" level=info msg="API listen on [::]:2376"
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.877994406Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 15:33:55 NoKubernetes-273000 systemd[1]: Started Docker Application Container Engine.
	Jul 19 15:33:56 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:56.875595064Z" level=info msg="Processing signal 'terminated'"
	Jul 19 15:33:56 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:56.876405264Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 15:33:56 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:56.876838833Z" level=info msg="Daemon shutdown complete"
	Jul 19 15:33:56 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:56.876921920Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 15:33:56 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:56.876930923Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 15:33:56 NoKubernetes-273000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 15:33:57 NoKubernetes-273000 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 15:33:57 NoKubernetes-273000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 15:33:57 NoKubernetes-273000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:57.916792440Z" level=info msg="Starting up"
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:57.917244221Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:57.917775384Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=884
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.936136771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951691476Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951714068Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951737995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951747695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951796169Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951829404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951930654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951964622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951977284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951984650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.952001226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.952078460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.953637707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.953676768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.953783302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.953817754Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.953842070Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.953857828Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954028993Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954072670Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954085857Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954106461Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954117883Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954149255Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954279627Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954384667Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954421661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954434099Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954443836Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954452259Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954460235Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954469196Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954478776Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954493190Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954504434Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954512285Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954535706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954547924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954556680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954565589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954573774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954582132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954592353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954600257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954608614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954617737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954625134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954637701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954648446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954686383Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954723754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954733572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954741289Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954789095Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954823803Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954833709Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954842124Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954848537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954857463Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954864877Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.955344434Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.955409444Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.955443711Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.955775216Z" level=info msg="containerd successfully booted in 0.020087s"
	Jul 19 15:33:58 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:58.954946714Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 15:33:58 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:58.968227528Z" level=info msg="Loading containers: start."
	Jul 19 15:33:59 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:59.039346311Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 15:33:59 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:59.424365521Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 15:33:59 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:59.528341144Z" level=info msg="Loading containers: done."
	Jul 19 15:33:59 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:59.628667141Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 15:33:59 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:59.628908043Z" level=info msg="Daemon has completed initialization"
	Jul 19 15:33:59 NoKubernetes-273000 systemd[1]: Started Docker Application Container Engine.
	Jul 19 15:33:59 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:59.659273750Z" level=info msg="API listen on [::]:2376"
	Jul 19 15:33:59 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:59.659362999Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 15:34:04 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:34:04.580862194Z" level=info msg="Processing signal 'terminated'"
	Jul 19 15:34:04 NoKubernetes-273000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 15:34:04 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:34:04.584603051Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 15:34:04 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:34:04.585031915Z" level=info msg="Daemon shutdown complete"
	Jul 19 15:34:04 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:34:04.585070533Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 15:34:04 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:34:04.585086140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 15:34:05 NoKubernetes-273000 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 15:34:05 NoKubernetes-273000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 15:34:05 NoKubernetes-273000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:05.625979164Z" level=info msg="Starting up"
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:05.626399195Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:05.626948257Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1242
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.642047836Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657479972Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657528926Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657579041Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657591207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657610038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657618641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657724074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657758545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657770618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657777698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657800511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657877911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659424174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659462492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659573654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659607121Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659624764Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659636470Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659786532Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659828506Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659841420Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659851153Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659862317Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659927190Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660123816Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660183151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660216314Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660227902Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660236565Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660244611Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660253211Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660267950Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660279663Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660288183Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660296336Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660303689Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660316560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660325886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660334794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660350239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660360476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660370803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660378770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660386370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660394492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660403685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660411142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660419079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660426984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660436603Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660450731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660458951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660466686Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660517327Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660551225Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660560955Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660569006Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660575267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660583832Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660590460Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660749363Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660824535Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660879957Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660893334Z" level=info msg="containerd successfully booted in 0.019227s"
	Jul 19 15:34:06 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:06.665697420Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.076734180Z" level=info msg="Loading containers: start."
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.146934373Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.206613610Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.248236931Z" level=info msg="Loading containers: done."
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.256182205Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.256242797Z" level=info msg="Daemon has completed initialization"
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.279752016Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 15:34:07 NoKubernetes-273000 systemd[1]: Started Docker Application Container Engine.
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.281301748Z" level=info msg="API listen on [::]:2376"
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.334056889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.334313578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.334379679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.334542541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.338899517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.338942081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.338953436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.339011309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.342486966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.342522983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.342533912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.342629009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.343116935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.343188716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.343198632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.343255904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.547832299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.547870855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.547888902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.548009216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.579936289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.579970171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.579983550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.580040842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.578797323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.578908905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.578922204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.578986963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.584982759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.585053665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.585069638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.585153320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.564154473Z" level=info msg="ignoring event" container=4ed4486df69e764ddec977619c61345a70cb86695f02086d6be91925ab014bd9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.564256092Z" level=info msg="shim disconnected" id=4ed4486df69e764ddec977619c61345a70cb86695f02086d6be91925ab014bd9 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.564293711Z" level=warning msg="cleaning up after shim disconnected" id=4ed4486df69e764ddec977619c61345a70cb86695f02086d6be91925ab014bd9 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.564299974Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.630391009Z" level=info msg="ignoring event" container=db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.631489458Z" level=info msg="shim disconnected" id=db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.640385339Z" level=warning msg="cleaning up after shim disconnected" id=db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.640419085Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.644933660Z" level=info msg="shim disconnected" id=57cf6ec405518c5f5188d97f60a3e5a4a4ca37402594ba66385d2646cce24290 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.646808419Z" level=info msg="ignoring event" container=57cf6ec405518c5f5188d97f60a3e5a4a4ca37402594ba66385d2646cce24290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.647011743Z" level=info msg="ignoring event" container=77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.647232306Z" level=warning msg="cleaning up after shim disconnected" id=57cf6ec405518c5f5188d97f60a3e5a4a4ca37402594ba66385d2646cce24290 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.647283173Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.660468525Z" level=info msg="shim disconnected" id=77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.662550614Z" level=info msg="ignoring event" container=85e4dbef45d040ea9b7496cc64ab94bee70cf90fc0dad47af0f6e093d4b51130 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.662736525Z" level=info msg="ignoring event" container=d44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.663196296Z" level=info msg="ignoring event" container=ad44cad4f3ca2ae3ab73f720b4147242f3ba9054868aa14b6d51db5dfc00c17f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.663257441Z" level=info msg="ignoring event" container=f5e7c80426981ff3a38a2a9741022e2e76c72061c028d2201184be312783945a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.663502675Z" level=warning msg="cleaning up after shim disconnected" id=77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.662058139Z" level=info msg="shim disconnected" id=f5e7c80426981ff3a38a2a9741022e2e76c72061c028d2201184be312783945a namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.663740436Z" level=warning msg="cleaning up after shim disconnected" id=f5e7c80426981ff3a38a2a9741022e2e76c72061c028d2201184be312783945a namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.663789120Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.660606565Z" level=info msg="shim disconnected" id=ad44cad4f3ca2ae3ab73f720b4147242f3ba9054868aa14b6d51db5dfc00c17f namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.664054088Z" level=warning msg="cleaning up after shim disconnected" id=ad44cad4f3ca2ae3ab73f720b4147242f3ba9054868aa14b6d51db5dfc00c17f namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.664129557Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.663700578Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.660621597Z" level=info msg="shim disconnected" id=85e4dbef45d040ea9b7496cc64ab94bee70cf90fc0dad47af0f6e093d4b51130 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.668582855Z" level=warning msg="cleaning up after shim disconnected" id=85e4dbef45d040ea9b7496cc64ab94bee70cf90fc0dad47af0f6e093d4b51130 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.669955125Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.662041194Z" level=info msg="shim disconnected" id=d44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.673645992Z" level=warning msg="cleaning up after shim disconnected" id=d44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.673741494Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:22 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:22.457276594Z" level=info msg="Processing signal 'terminated'"
	Jul 19 15:34:22 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:22.458173737Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 15:34:22 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:22.458748742Z" level=info msg="Daemon shutdown complete"
	Jul 19 15:34:22 NoKubernetes-273000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 15:34:23 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:23.006491278Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 15:34:23 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:23.006549663Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 15:34:24 NoKubernetes-273000 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 15:34:24 NoKubernetes-273000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 15:34:24 NoKubernetes-273000 systemd[1]: docker.service: Consumed 1.156s CPU time.
	Jul 19 15:34:24 NoKubernetes-273000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 15:34:24 NoKubernetes-273000 dockerd[2606]: time="2024-07-19T15:34:24.046626973Z" level=info msg="Starting up"
	Jul 19 15:35:24 NoKubernetes-273000 dockerd[2606]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 15:35:24 NoKubernetes-273000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 15:35:24 NoKubernetes-273000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 15:35:24 NoKubernetes-273000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-273000 --no-kubernetes --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-273000 -n NoKubernetes-273000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-273000 -n NoKubernetes-273000: exit status 2 (152.041248ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestNoKubernetes/serial/StartWithStopK8s FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestNoKubernetes/serial/StartWithStopK8s]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-273000 logs -n 25
E0719 08:35:52.283747    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 08:36:33.496675    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p NoKubernetes-273000 logs -n 25: (2m0.774702715s)
helpers_test.go:252: TestNoKubernetes/serial/StartWithStopK8s logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-458000           | cert-expiration-458000    | jenkins | v1.33.1 | 19 Jul 24 08:29 PDT | 19 Jul 24 08:29 PDT |
	|         | --memory=2048                       |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h             |                           |         |         |                     |                     |
	|         | --driver=hyperkit                   |                           |         |         |                     |                     |
	| start   | -p running-upgrade-569000           | running-upgrade-569000    | jenkins | v1.33.1 | 19 Jul 24 08:29 PDT | 19 Jul 24 08:29 PDT |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1              |                           |         |         |                     |                     |
	|         | --driver=hyperkit                   |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-458000           | cert-expiration-458000    | jenkins | v1.33.1 | 19 Jul 24 08:29 PDT | 19 Jul 24 08:29 PDT |
	| start   | -p kubernetes-upgrade-626000        | kubernetes-upgrade-626000 | jenkins | v1.33.1 | 19 Jul 24 08:29 PDT | 19 Jul 24 08:30 PDT |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1              |                           |         |         |                     |                     |
	|         | --driver=hyperkit                   |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-569000           | running-upgrade-569000    | jenkins | v1.33.1 | 19 Jul 24 08:29 PDT | 19 Jul 24 08:29 PDT |
	| start   | -p stopped-upgrade-958000           | minikube                  | jenkins | v1.26.0 | 19 Jul 24 08:29 PDT | 19 Jul 24 08:30 PDT |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --vm-driver=hyperkit                |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-626000        | kubernetes-upgrade-626000 | jenkins | v1.33.1 | 19 Jul 24 08:30 PDT | 19 Jul 24 08:30 PDT |
	| start   | -p kubernetes-upgrade-626000        | kubernetes-upgrade-626000 | jenkins | v1.33.1 | 19 Jul 24 08:30 PDT | 19 Jul 24 08:33 PDT |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1              |                           |         |         |                     |                     |
	|         | --driver=hyperkit                   |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-958000 stop         | minikube                  | jenkins | v1.26.0 | 19 Jul 24 08:30 PDT | 19 Jul 24 08:30 PDT |
	| start   | -p stopped-upgrade-958000           | stopped-upgrade-958000    | jenkins | v1.33.1 | 19 Jul 24 08:30 PDT | 19 Jul 24 08:31 PDT |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1              |                           |         |         |                     |                     |
	|         | --driver=hyperkit                   |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-958000           | stopped-upgrade-958000    | jenkins | v1.33.1 | 19 Jul 24 08:31 PDT | 19 Jul 24 08:31 PDT |
	| start   | -p pause-571000 --memory=2048       | pause-571000              | jenkins | v1.33.1 | 19 Jul 24 08:31 PDT | 19 Jul 24 08:33 PDT |
	|         | --install-addons=false              |                           |         |         |                     |                     |
	|         | --wait=all --driver=hyperkit        |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-626000        | kubernetes-upgrade-626000 | jenkins | v1.33.1 | 19 Jul 24 08:33 PDT |                     |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                           |         |         |                     |                     |
	|         | --driver=hyperkit                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-626000        | kubernetes-upgrade-626000 | jenkins | v1.33.1 | 19 Jul 24 08:33 PDT | 19 Jul 24 08:33 PDT |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1              |                           |         |         |                     |                     |
	|         | --driver=hyperkit                   |                           |         |         |                     |                     |
	| start   | -p pause-571000                     | pause-571000              | jenkins | v1.33.1 | 19 Jul 24 08:33 PDT | 19 Jul 24 08:33 PDT |
	|         | --alsologtostderr -v=1              |                           |         |         |                     |                     |
	|         | --driver=hyperkit                   |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-626000        | kubernetes-upgrade-626000 | jenkins | v1.33.1 | 19 Jul 24 08:33 PDT | 19 Jul 24 08:33 PDT |
	| start   | -p NoKubernetes-273000              | NoKubernetes-273000       | jenkins | v1.33.1 | 19 Jul 24 08:33 PDT |                     |
	|         | --no-kubernetes                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20           |                           |         |         |                     |                     |
	|         | --driver=hyperkit                   |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-273000              | NoKubernetes-273000       | jenkins | v1.33.1 | 19 Jul 24 08:33 PDT | 19 Jul 24 08:34 PDT |
	|         | --driver=hyperkit                   |                           |         |         |                     |                     |
	| pause   | -p pause-571000                     | pause-571000              | jenkins | v1.33.1 | 19 Jul 24 08:33 PDT | 19 Jul 24 08:33 PDT |
	|         | --alsologtostderr -v=5              |                           |         |         |                     |                     |
	| unpause | -p pause-571000                     | pause-571000              | jenkins | v1.33.1 | 19 Jul 24 08:33 PDT | 19 Jul 24 08:33 PDT |
	|         | --alsologtostderr -v=5              |                           |         |         |                     |                     |
	| pause   | -p pause-571000                     | pause-571000              | jenkins | v1.33.1 | 19 Jul 24 08:33 PDT | 19 Jul 24 08:33 PDT |
	|         | --alsologtostderr -v=5              |                           |         |         |                     |                     |
	| delete  | -p pause-571000                     | pause-571000              | jenkins | v1.33.1 | 19 Jul 24 08:33 PDT | 19 Jul 24 08:33 PDT |
	|         | --alsologtostderr -v=5              |                           |         |         |                     |                     |
	| delete  | -p pause-571000                     | pause-571000              | jenkins | v1.33.1 | 19 Jul 24 08:33 PDT | 19 Jul 24 08:33 PDT |
	| start   | -p auto-248000 --memory=3072        | auto-248000               | jenkins | v1.33.1 | 19 Jul 24 08:33 PDT |                     |
	|         | --alsologtostderr --wait=true       |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                  |                           |         |         |                     |                     |
	|         | --driver=hyperkit                   |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-273000              | NoKubernetes-273000       | jenkins | v1.33.1 | 19 Jul 24 08:34 PDT |                     |
	|         | --no-kubernetes                     |                           |         |         |                     |                     |
	|         | --driver=hyperkit                   |                           |         |         |                     |                     |
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 08:34:19
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 08:34:19.568064    6435 out.go:291] Setting OutFile to fd 1 ...
	I0719 08:34:19.568327    6435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 08:34:19.568330    6435 out.go:304] Setting ErrFile to fd 2...
	I0719 08:34:19.568332    6435 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 08:34:19.568515    6435 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
	I0719 08:34:19.570248    6435 out.go:298] Setting JSON to false
	I0719 08:34:19.593608    6435 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5630,"bootTime":1721397629,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0719 08:34:19.593684    6435 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 08:34:19.615576    6435 out.go:177] * [NoKubernetes-273000] minikube v1.33.1 on Darwin 14.5
	I0719 08:34:19.673376    6435 notify.go:220] Checking for updates...
	I0719 08:34:19.695085    6435 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 08:34:19.753224    6435 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
	I0719 08:34:19.795905    6435 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0719 08:34:19.842932    6435 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 08:34:19.883935    6435 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube
	I0719 08:34:19.924470    6435 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 08:34:19.946080    6435 config.go:182] Loaded profile config "NoKubernetes-273000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 08:34:19.946422    6435 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 08:34:19.946461    6435 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 08:34:19.955309    6435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55022
	I0719 08:34:19.955660    6435 main.go:141] libmachine: () Calling .GetVersion
	I0719 08:34:19.956053    6435 main.go:141] libmachine: Using API Version  1
	I0719 08:34:19.956078    6435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 08:34:19.956266    6435 main.go:141] libmachine: () Calling .GetMachineName
	I0719 08:34:19.956383    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .DriverName
	I0719 08:34:19.956488    6435 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0719 08:34:19.956574    6435 start.go:1783] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0719 08:34:19.956592    6435 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 08:34:19.956831    6435 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 08:34:19.956851    6435 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 08:34:19.965679    6435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55024
	I0719 08:34:19.966033    6435 main.go:141] libmachine: () Calling .GetVersion
	I0719 08:34:19.966406    6435 main.go:141] libmachine: Using API Version  1
	I0719 08:34:19.966427    6435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 08:34:19.966646    6435 main.go:141] libmachine: () Calling .GetMachineName
	I0719 08:34:19.966742    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .DriverName
	I0719 08:34:20.010894    6435 out.go:177] * Using the hyperkit driver based on existing profile
	I0719 08:34:20.068701    6435 start.go:297] selected driver: hyperkit
	I0719 08:34:20.068709    6435 start.go:901] validating driver "hyperkit" against &{Name:NoKubernetes-273000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-273000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.34 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 08:34:20.068833    6435 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 08:34:20.068905    6435 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0719 08:34:20.068987    6435 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 08:34:20.069095    6435 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19302-1032/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0719 08:34:20.077936    6435 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0719 08:34:20.081805    6435 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 08:34:20.081821    6435 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0719 08:34:20.084525    6435 cni.go:84] Creating CNI manager for ""
	I0719 08:34:20.084548    6435 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 08:34:20.084568    6435 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0719 08:34:20.084623    6435 start.go:340] cluster config:
	{Name:NoKubernetes-273000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-273000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.34 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 08:34:20.084714    6435 iso.go:125] acquiring lock: {Name:mkadb9ba7febb03c49d2e1dd7dfa4b91b2759763 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 08:34:20.126530    6435 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-273000
	I0719 08:34:20.164006    6435 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime docker
	W0719 08:34:20.226100    6435 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0719 08:34:20.226288    6435 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/NoKubernetes-273000/config.json ...
	I0719 08:34:20.227159    6435 start.go:360] acquireMachinesLock for NoKubernetes-273000: {Name:mke9fa98f500419c1998c374f8c492543e051339 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 08:34:20.227261    6435 start.go:364] duration metric: took 85.264µs to acquireMachinesLock for "NoKubernetes-273000"
	I0719 08:34:20.227288    6435 start.go:96] Skipping create...Using existing machine configuration
	I0719 08:34:20.227305    6435 fix.go:54] fixHost starting: 
	I0719 08:34:20.227776    6435 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 08:34:20.227805    6435 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 08:34:20.236942    6435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55026
	I0719 08:34:20.237369    6435 main.go:141] libmachine: () Calling .GetVersion
	I0719 08:34:20.237688    6435 main.go:141] libmachine: Using API Version  1
	I0719 08:34:20.237695    6435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 08:34:20.237927    6435 main.go:141] libmachine: () Calling .GetMachineName
	I0719 08:34:20.238045    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .DriverName
	I0719 08:34:20.238140    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetState
	I0719 08:34:20.238220    6435 main.go:141] libmachine: (NoKubernetes-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:34:20.238313    6435 main.go:141] libmachine: (NoKubernetes-273000) DBG | hyperkit pid from json: 6367
	I0719 08:34:20.239366    6435 fix.go:112] recreateIfNeeded on NoKubernetes-273000: state=Running err=<nil>
	W0719 08:34:20.239381    6435 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 08:34:20.281537    6435 out.go:177] * Updating the running hyperkit "NoKubernetes-273000" VM ...
	I0719 08:34:20.319928    6435 machine.go:94] provisionDockerMachine start ...
	I0719 08:34:20.319952    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .DriverName
	I0719 08:34:20.320396    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHHostname
	I0719 08:34:20.320817    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHPort
	I0719 08:34:20.321108    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:20.321355    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:20.321506    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHUsername
	I0719 08:34:20.321684    6435 main.go:141] libmachine: Using SSH client type: native
	I0719 08:34:20.321967    6435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x76c90c0] 0x76cbe20 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0719 08:34:20.321973    6435 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 08:34:20.386924    6435 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-273000
	
	I0719 08:34:20.386935    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetMachineName
	I0719 08:34:20.387101    6435 buildroot.go:166] provisioning hostname "NoKubernetes-273000"
	I0719 08:34:20.387109    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetMachineName
	I0719 08:34:20.387214    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHHostname
	I0719 08:34:20.387294    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHPort
	I0719 08:34:20.387394    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:20.387487    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:20.387619    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHUsername
	I0719 08:34:20.387763    6435 main.go:141] libmachine: Using SSH client type: native
	I0719 08:34:20.387931    6435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x76c90c0] 0x76cbe20 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0719 08:34:20.387936    6435 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-273000 && echo "NoKubernetes-273000" | sudo tee /etc/hostname
	I0719 08:34:20.462536    6435 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-273000
	
	I0719 08:34:20.462554    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHHostname
	I0719 08:34:20.462681    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHPort
	I0719 08:34:20.462785    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:20.462855    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:20.462935    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHUsername
	I0719 08:34:20.463059    6435 main.go:141] libmachine: Using SSH client type: native
	I0719 08:34:20.463225    6435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x76c90c0] 0x76cbe20 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0719 08:34:20.463234    6435 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-273000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-273000/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-273000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 08:34:20.530534    6435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 08:34:20.530549    6435 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1032/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1032/.minikube}
	I0719 08:34:20.530570    6435 buildroot.go:174] setting up certificates
	I0719 08:34:20.530581    6435 provision.go:84] configureAuth start
	I0719 08:34:20.530586    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetMachineName
	I0719 08:34:20.530720    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetIP
	I0719 08:34:20.530812    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHHostname
	I0719 08:34:20.530890    6435 provision.go:143] copyHostCerts
	I0719 08:34:20.530966    6435 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1032/.minikube/ca.pem, removing ...
	I0719 08:34:20.530971    6435 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1032/.minikube/ca.pem
	I0719 08:34:20.531095    6435 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1032/.minikube/ca.pem (1082 bytes)
	I0719 08:34:20.531336    6435 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1032/.minikube/cert.pem, removing ...
	I0719 08:34:20.531339    6435 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1032/.minikube/cert.pem
	I0719 08:34:20.531408    6435 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1032/.minikube/cert.pem (1123 bytes)
	I0719 08:34:20.531566    6435 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1032/.minikube/key.pem, removing ...
	I0719 08:34:20.531569    6435 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1032/.minikube/key.pem
	I0719 08:34:20.531635    6435 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1032/.minikube/key.pem (1679 bytes)
	I0719 08:34:20.531773    6435 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1032/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1032/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1032/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-273000 san=[127.0.0.1 192.169.0.34 NoKubernetes-273000 localhost minikube]
	I0719 08:34:20.626089    6435 provision.go:177] copyRemoteCerts
	I0719 08:34:20.626153    6435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 08:34:20.626169    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHHostname
	I0719 08:34:20.626312    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHPort
	I0719 08:34:20.626406    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:20.626507    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHUsername
	I0719 08:34:20.626605    6435 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/NoKubernetes-273000/id_rsa Username:docker}
	I0719 08:34:20.667391    6435 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 08:34:20.687519    6435 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0719 08:34:20.709568    6435 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 08:34:20.729813    6435 provision.go:87] duration metric: took 199.224664ms to configureAuth
	I0719 08:34:20.729821    6435 buildroot.go:189] setting minikube options for container-runtime
	I0719 08:34:20.729949    6435 config.go:182] Loaded profile config "NoKubernetes-273000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0719 08:34:20.729959    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .DriverName
	I0719 08:34:20.730083    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHHostname
	I0719 08:34:20.730163    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHPort
	I0719 08:34:20.730244    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:20.730325    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:20.730410    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHUsername
	I0719 08:34:20.730508    6435 main.go:141] libmachine: Using SSH client type: native
	I0719 08:34:20.730639    6435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x76c90c0] 0x76cbe20 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0719 08:34:20.730654    6435 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 08:34:20.795145    6435 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 08:34:20.795151    6435 buildroot.go:70] root file system type: tmpfs
	I0719 08:34:20.795227    6435 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 08:34:20.795241    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHHostname
	I0719 08:34:20.795365    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHPort
	I0719 08:34:20.795460    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:20.795534    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:20.795610    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHUsername
	I0719 08:34:20.795737    6435 main.go:141] libmachine: Using SSH client type: native
	I0719 08:34:20.795877    6435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x76c90c0] 0x76cbe20 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0719 08:34:20.795919    6435 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 08:34:20.871542    6435 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 08:34:20.871561    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHHostname
	I0719 08:34:20.871696    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHPort
	I0719 08:34:20.871788    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:20.871882    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:20.871973    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHUsername
	I0719 08:34:20.872096    6435 main.go:141] libmachine: Using SSH client type: native
	I0719 08:34:20.872239    6435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x76c90c0] 0x76cbe20 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0719 08:34:20.872248    6435 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 08:34:20.942570    6435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 08:34:20.942578    6435 machine.go:97] duration metric: took 622.658336ms to provisionDockerMachine
	I0719 08:34:20.942591    6435 start.go:293] postStartSetup for "NoKubernetes-273000" (driver="hyperkit")
	I0719 08:34:20.942596    6435 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 08:34:20.942604    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .DriverName
	I0719 08:34:20.942794    6435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 08:34:20.942803    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHHostname
	I0719 08:34:20.942906    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHPort
	I0719 08:34:20.943010    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:20.943114    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHUsername
	I0719 08:34:20.943200    6435 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/NoKubernetes-273000/id_rsa Username:docker}
	I0719 08:34:20.983472    6435 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 08:34:20.988377    6435 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 08:34:20.988389    6435 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1032/.minikube/addons for local assets ...
	I0719 08:34:20.988485    6435 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1032/.minikube/files for local assets ...
	I0719 08:34:20.988623    6435 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1032/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0719 08:34:20.988786    6435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 08:34:20.996865    6435 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0719 08:34:21.018017    6435 start.go:296] duration metric: took 75.409062ms for postStartSetup
	I0719 08:34:21.018038    6435 fix.go:56] duration metric: took 790.763721ms for fixHost
	I0719 08:34:21.018072    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHHostname
	I0719 08:34:21.018205    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHPort
	I0719 08:34:21.018294    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:21.018378    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:21.018447    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHUsername
	I0719 08:34:21.018561    6435 main.go:141] libmachine: Using SSH client type: native
	I0719 08:34:21.018702    6435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x76c90c0] 0x76cbe20 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0719 08:34:21.018706    6435 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 08:34:21.081792    6435 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721403261.319383078
	
	I0719 08:34:21.081800    6435 fix.go:216] guest clock: 1721403261.319383078
	I0719 08:34:21.081804    6435 fix.go:229] Guest: 2024-07-19 08:34:21.319383078 -0700 PDT Remote: 2024-07-19 08:34:21.018041 -0700 PDT m=+1.485174150 (delta=301.342078ms)
	I0719 08:34:21.081823    6435 fix.go:200] guest clock delta is within tolerance: 301.342078ms
	I0719 08:34:21.081825    6435 start.go:83] releasing machines lock for "NoKubernetes-273000", held for 854.580789ms
	I0719 08:34:21.081843    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .DriverName
	I0719 08:34:21.081973    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetIP
	I0719 08:34:21.082056    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .DriverName
	I0719 08:34:21.082331    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .DriverName
	I0719 08:34:21.082424    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .DriverName
	I0719 08:34:21.082531    6435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 08:34:21.082560    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHHostname
	I0719 08:34:21.082563    6435 ssh_runner.go:195] Run: cat /version.json
	I0719 08:34:21.082573    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHHostname
	I0719 08:34:21.082647    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHPort
	I0719 08:34:21.082708    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHPort
	I0719 08:34:21.082743    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:21.082793    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHKeyPath
	I0719 08:34:21.082841    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHUsername
	I0719 08:34:21.082861    6435 main.go:141] libmachine: (NoKubernetes-273000) Calling .GetSSHUsername
	I0719 08:34:21.082934    6435 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/NoKubernetes-273000/id_rsa Username:docker}
	I0719 08:34:21.082942    6435 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/NoKubernetes-273000/id_rsa Username:docker}
	I0719 08:34:21.164789    6435 ssh_runner.go:195] Run: systemctl --version
	I0719 08:34:21.170418    6435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 08:34:21.204537    6435 out.go:177]   - Kubernetes: Stopping ...
	I0719 08:34:21.224937    6435 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
	I0719 08:34:21.256623    6435 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	W0719 08:34:21.271299    6435 kubeadm.go:838] found 8 kube-system containers to stop
	I0719 08:34:21.271314    6435 docker.go:483] Stopping containers: [77abac477972 f5e7c8042698 d44c0ff04c29 db10d0ec2376 ad44cad4f3ca 85e4dbef45d0 57cf6ec40551 4ed4486df69e]
	I0719 08:34:21.271374    6435 ssh_runner.go:195] Run: docker stop 77abac477972 f5e7c8042698 d44c0ff04c29 db10d0ec2376 ad44cad4f3ca 85e4dbef45d0 57cf6ec40551 4ed4486df69e
	I0719 08:34:21.501359    6435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 08:34:21.536479    6435 out.go:177]   - Kubernetes: Stopped
	I0719 08:34:21.557798    6435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 08:34:21.563052    6435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 08:34:21.563104    6435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 08:34:21.570526    6435 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 08:34:21.570535    6435 start.go:495] detecting cgroup driver to use...
	I0719 08:34:21.570634    6435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 08:34:21.585488    6435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 08:34:21.593806    6435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 08:34:21.602194    6435 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 08:34:21.602237    6435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 08:34:21.610638    6435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 08:34:21.619118    6435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 08:34:21.627968    6435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 08:34:21.636513    6435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 08:34:21.644951    6435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 08:34:21.653248    6435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 08:34:21.660686    6435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 08:34:21.668348    6435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 08:34:21.765457    6435 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 08:34:21.784663    6435 start.go:495] detecting cgroup driver to use...
	I0719 08:34:21.784729    6435 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 08:34:21.807171    6435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 08:34:21.818834    6435 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 08:34:21.839322    6435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 08:34:21.851448    6435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 08:34:21.861914    6435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 08:34:21.877426    6435 ssh_runner.go:195] Run: which cri-dockerd
	I0719 08:34:21.880467    6435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 08:34:21.887572    6435 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 08:34:21.901019    6435 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 08:34:21.999825    6435 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 08:34:22.094152    6435 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 08:34:22.094220    6435 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 08:34:22.108499    6435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 08:34:22.205281    6435 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 08:35:23.799188    6435 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.595468721s)
	I0719 08:35:23.799246    6435 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0719 08:35:23.845216    6435 out.go:177] 
	W0719 08:35:23.866471    6435 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 15:33:54 NoKubernetes-273000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:54.592099296Z" level=info msg="Starting up"
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:54.592549983Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:54.593168654Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=539
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.612022462Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627146952Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627170480Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627230221Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627266458Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627326959Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627337103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627462320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627497461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627509537Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627516968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627602960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.627767339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.629316724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.629354797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.629461187Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.629495175Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.629564138Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.629628997Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.648410342Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.648823134Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.648878407Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.648937192Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.648997163Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.649145215Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650180764Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650359176Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650446669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650460990Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650470445Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650479513Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650488338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650497527Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650507156Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650515692Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650523638Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650532052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650586315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650600843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650609653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650618661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650626759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650635413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650643325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650651767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650660557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650670200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650684561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650759525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650771519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650782552Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650803486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650814119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650821726Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650896650Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650934739Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650947225Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650955616Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.650963490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.651061483Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.651074151Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.651303867Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.651361845Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.651419310Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 15:33:54 NoKubernetes-273000 dockerd[539]: time="2024-07-19T15:33:54.651452641Z" level=info msg="containerd successfully booted in 0.040120s"
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.653716784Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.659933008Z" level=info msg="Loading containers: start."
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.750241885Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.842744680Z" level=info msg="Loading containers: done."
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.853833822Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.853921213Z" level=info msg="Daemon has completed initialization"
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.877908561Z" level=info msg="API listen on [::]:2376"
	Jul 19 15:33:55 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:55.877994406Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 15:33:55 NoKubernetes-273000 systemd[1]: Started Docker Application Container Engine.
	Jul 19 15:33:56 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:56.875595064Z" level=info msg="Processing signal 'terminated'"
	Jul 19 15:33:56 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:56.876405264Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 15:33:56 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:56.876838833Z" level=info msg="Daemon shutdown complete"
	Jul 19 15:33:56 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:56.876921920Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 15:33:56 NoKubernetes-273000 dockerd[532]: time="2024-07-19T15:33:56.876930923Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 15:33:56 NoKubernetes-273000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 15:33:57 NoKubernetes-273000 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 15:33:57 NoKubernetes-273000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 15:33:57 NoKubernetes-273000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:57.916792440Z" level=info msg="Starting up"
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:57.917244221Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:57.917775384Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=884
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.936136771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951691476Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951714068Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951737995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951747695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951796169Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951829404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951930654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951964622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951977284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.951984650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.952001226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.952078460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.953637707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.953676768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.953783302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.953817754Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.953842070Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.953857828Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954028993Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954072670Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954085857Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954106461Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954117883Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954149255Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954279627Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954384667Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954421661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954434099Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954443836Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954452259Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954460235Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954469196Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954478776Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954493190Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954504434Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954512285Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954535706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954547924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954556680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954565589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954573774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954582132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954592353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954600257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954608614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954617737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954625134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954637701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954648446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954686383Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954723754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954733572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954741289Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954789095Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954823803Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954833709Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954842124Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954848537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954857463Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.954864877Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.955344434Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.955409444Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.955443711Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 15:33:57 NoKubernetes-273000 dockerd[884]: time="2024-07-19T15:33:57.955775216Z" level=info msg="containerd successfully booted in 0.020087s"
	Jul 19 15:33:58 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:58.954946714Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 15:33:58 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:58.968227528Z" level=info msg="Loading containers: start."
	Jul 19 15:33:59 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:59.039346311Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 15:33:59 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:59.424365521Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 15:33:59 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:59.528341144Z" level=info msg="Loading containers: done."
	Jul 19 15:33:59 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:59.628667141Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 15:33:59 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:59.628908043Z" level=info msg="Daemon has completed initialization"
	Jul 19 15:33:59 NoKubernetes-273000 systemd[1]: Started Docker Application Container Engine.
	Jul 19 15:33:59 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:59.659273750Z" level=info msg="API listen on [::]:2376"
	Jul 19 15:33:59 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:33:59.659362999Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 15:34:04 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:34:04.580862194Z" level=info msg="Processing signal 'terminated'"
	Jul 19 15:34:04 NoKubernetes-273000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 15:34:04 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:34:04.584603051Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 15:34:04 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:34:04.585031915Z" level=info msg="Daemon shutdown complete"
	Jul 19 15:34:04 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:34:04.585070533Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 15:34:04 NoKubernetes-273000 dockerd[877]: time="2024-07-19T15:34:04.585086140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 15:34:05 NoKubernetes-273000 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 15:34:05 NoKubernetes-273000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 15:34:05 NoKubernetes-273000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:05.625979164Z" level=info msg="Starting up"
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:05.626399195Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:05.626948257Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1242
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.642047836Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657479972Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657528926Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657579041Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657591207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657610038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657618641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657724074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657758545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657770618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657777698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657800511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.657877911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659424174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659462492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659573654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659607121Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659624764Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659636470Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659786532Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659828506Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659841420Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659851153Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659862317Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.659927190Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660123816Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660183151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660216314Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660227902Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660236565Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660244611Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660253211Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660267950Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660279663Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660288183Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660296336Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660303689Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660316560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660325886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660334794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660350239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660360476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660370803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660378770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660386370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660394492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660403685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660411142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660419079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660426984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660436603Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660450731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660458951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660466686Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660517327Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660551225Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660560955Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660569006Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660575267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660583832Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660590460Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660749363Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660824535Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660879957Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 15:34:05 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:05.660893334Z" level=info msg="containerd successfully booted in 0.019227s"
	Jul 19 15:34:06 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:06.665697420Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.076734180Z" level=info msg="Loading containers: start."
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.146934373Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.206613610Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.248236931Z" level=info msg="Loading containers: done."
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.256182205Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.256242797Z" level=info msg="Daemon has completed initialization"
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.279752016Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 15:34:07 NoKubernetes-273000 systemd[1]: Started Docker Application Container Engine.
	Jul 19 15:34:07 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:07.281301748Z" level=info msg="API listen on [::]:2376"
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.334056889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.334313578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.334379679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.334542541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.338899517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.338942081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.338953436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.339011309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.342486966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.342522983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.342533912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.342629009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.343116935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.343188716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.343198632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.343255904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.547832299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.547870855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.547888902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.548009216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.579936289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.579970171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.579983550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.580040842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.578797323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.578908905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.578922204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.578986963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.584982759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.585053665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.585069638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:13 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:13.585153320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.564154473Z" level=info msg="ignoring event" container=4ed4486df69e764ddec977619c61345a70cb86695f02086d6be91925ab014bd9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.564256092Z" level=info msg="shim disconnected" id=4ed4486df69e764ddec977619c61345a70cb86695f02086d6be91925ab014bd9 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.564293711Z" level=warning msg="cleaning up after shim disconnected" id=4ed4486df69e764ddec977619c61345a70cb86695f02086d6be91925ab014bd9 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.564299974Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.630391009Z" level=info msg="ignoring event" container=db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.631489458Z" level=info msg="shim disconnected" id=db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.640385339Z" level=warning msg="cleaning up after shim disconnected" id=db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.640419085Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.644933660Z" level=info msg="shim disconnected" id=57cf6ec405518c5f5188d97f60a3e5a4a4ca37402594ba66385d2646cce24290 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.646808419Z" level=info msg="ignoring event" container=57cf6ec405518c5f5188d97f60a3e5a4a4ca37402594ba66385d2646cce24290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.647011743Z" level=info msg="ignoring event" container=77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.647232306Z" level=warning msg="cleaning up after shim disconnected" id=57cf6ec405518c5f5188d97f60a3e5a4a4ca37402594ba66385d2646cce24290 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.647283173Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.660468525Z" level=info msg="shim disconnected" id=77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.662550614Z" level=info msg="ignoring event" container=85e4dbef45d040ea9b7496cc64ab94bee70cf90fc0dad47af0f6e093d4b51130 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.662736525Z" level=info msg="ignoring event" container=d44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.663196296Z" level=info msg="ignoring event" container=ad44cad4f3ca2ae3ab73f720b4147242f3ba9054868aa14b6d51db5dfc00c17f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:21.663257441Z" level=info msg="ignoring event" container=f5e7c80426981ff3a38a2a9741022e2e76c72061c028d2201184be312783945a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.663502675Z" level=warning msg="cleaning up after shim disconnected" id=77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.662058139Z" level=info msg="shim disconnected" id=f5e7c80426981ff3a38a2a9741022e2e76c72061c028d2201184be312783945a namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.663740436Z" level=warning msg="cleaning up after shim disconnected" id=f5e7c80426981ff3a38a2a9741022e2e76c72061c028d2201184be312783945a namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.663789120Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.660606565Z" level=info msg="shim disconnected" id=ad44cad4f3ca2ae3ab73f720b4147242f3ba9054868aa14b6d51db5dfc00c17f namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.664054088Z" level=warning msg="cleaning up after shim disconnected" id=ad44cad4f3ca2ae3ab73f720b4147242f3ba9054868aa14b6d51db5dfc00c17f namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.664129557Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.663700578Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.660621597Z" level=info msg="shim disconnected" id=85e4dbef45d040ea9b7496cc64ab94bee70cf90fc0dad47af0f6e093d4b51130 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.668582855Z" level=warning msg="cleaning up after shim disconnected" id=85e4dbef45d040ea9b7496cc64ab94bee70cf90fc0dad47af0f6e093d4b51130 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.669955125Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.662041194Z" level=info msg="shim disconnected" id=d44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.673645992Z" level=warning msg="cleaning up after shim disconnected" id=d44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910 namespace=moby
	Jul 19 15:34:21 NoKubernetes-273000 dockerd[1242]: time="2024-07-19T15:34:21.673741494Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 15:34:22 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:22.457276594Z" level=info msg="Processing signal 'terminated'"
	Jul 19 15:34:22 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:22.458173737Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 15:34:22 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:22.458748742Z" level=info msg="Daemon shutdown complete"
	Jul 19 15:34:22 NoKubernetes-273000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 15:34:23 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:23.006491278Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 15:34:23 NoKubernetes-273000 dockerd[1236]: time="2024-07-19T15:34:23.006549663Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 15:34:24 NoKubernetes-273000 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 15:34:24 NoKubernetes-273000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 15:34:24 NoKubernetes-273000 systemd[1]: docker.service: Consumed 1.156s CPU time.
	Jul 19 15:34:24 NoKubernetes-273000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 15:34:24 NoKubernetes-273000 dockerd[2606]: time="2024-07-19T15:34:24.046626973Z" level=info msg="Starting up"
	Jul 19 15:35:24 NoKubernetes-273000 dockerd[2606]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 15:35:24 NoKubernetes-273000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 15:35:24 NoKubernetes-273000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 15:35:24 NoKubernetes-273000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0719 08:35:23.866732    6435 out.go:239] * 
	W0719 08:35:23.868063    6435 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 08:35:23.930321    6435 out.go:177] 
	
	
	==> Docker <==
	Jul 19 15:35:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:35:24Z" level=error msg="error getting RW layer size for container ID 'db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 15:35:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:35:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36'"
	Jul 19 15:35:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:35:24Z" level=error msg="error getting RW layer size for container ID 'd44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 15:35:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:35:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910'"
	Jul 19 15:35:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:35:24Z" level=error msg="error getting RW layer size for container ID '77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 15:35:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:35:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID '77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0'"
	Jul 19 15:35:24 NoKubernetes-273000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jul 19 15:35:24 NoKubernetes-273000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 15:35:24 NoKubernetes-273000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 15:35:24 NoKubernetes-273000 dockerd[2803]: time="2024-07-19T15:35:24.325913887Z" level=info msg="Starting up"
	Jul 19 15:36:24 NoKubernetes-273000 dockerd[2803]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 15:36:24 NoKubernetes-273000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 15:36:24 NoKubernetes-273000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 15:36:24 NoKubernetes-273000 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 15:36:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:36:24Z" level=error msg="error getting RW layer size for container ID 'd44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 15:36:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:36:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910'"
	Jul 19 15:36:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:36:24Z" level=error msg="error getting RW layer size for container ID 'f5e7c80426981ff3a38a2a9741022e2e76c72061c028d2201184be312783945a': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/f5e7c80426981ff3a38a2a9741022e2e76c72061c028d2201184be312783945a/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 15:36:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:36:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f5e7c80426981ff3a38a2a9741022e2e76c72061c028d2201184be312783945a'"
	Jul 19 15:36:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:36:24Z" level=error msg="error getting RW layer size for container ID '77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 15:36:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:36:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID '77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0'"
	Jul 19 15:36:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:36:24Z" level=error msg="error getting RW layer size for container ID 'db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 15:36:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:36:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36'"
	Jul 19 15:36:24 NoKubernetes-273000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jul 19 15:36:24 NoKubernetes-273000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 15:36:24 NoKubernetes-273000 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-19T15:36:24Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	sudo: /var/lib/minikube/binaries/v0.0.0/kubectl: command not found
	
	
	==> dmesg <==
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +3.264292] systemd-fstab-generator[512]: Ignoring "noauto" option for root device
	[  +0.093252] systemd-fstab-generator[524]: Ignoring "noauto" option for root device
	[  +1.749455] systemd-fstab-generator[773]: Ignoring "noauto" option for root device
	[  +0.292831] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[  +0.109789] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[  +0.107977] systemd-fstab-generator[869]: Ignoring "noauto" option for root device
	[  +2.603909] kauditd_printk_skb: 182 callbacks suppressed
	[  +0.283419] systemd-fstab-generator[1087]: Ignoring "noauto" option for root device
	[  +0.095956] systemd-fstab-generator[1099]: Ignoring "noauto" option for root device
	[  +0.103337] systemd-fstab-generator[1111]: Ignoring "noauto" option for root device
	[  +0.120185] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[Jul19 15:34] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.052077] kauditd_printk_skb: 85 callbacks suppressed
	[  +2.847565] systemd-fstab-generator[1472]: Ignoring "noauto" option for root device
	[  +4.904691] systemd-fstab-generator[1670]: Ignoring "noauto" option for root device
	[  +0.054164] kauditd_printk_skb: 70 callbacks suppressed
	[  +4.963072] systemd-fstab-generator[2075]: Ignoring "noauto" option for root device
	[  +0.081650] kauditd_printk_skb: 62 callbacks suppressed
	[  +1.188013] systemd-fstab-generator[2137]: Ignoring "noauto" option for root device
	[  +3.286630] systemd-fstab-generator[2537]: Ignoring "noauto" option for root device
	[  +0.230885] systemd-fstab-generator[2572]: Ignoring "noauto" option for root device
	[  +0.102784] systemd-fstab-generator[2584]: Ignoring "noauto" option for root device
	[  +0.108484] systemd-fstab-generator[2598]: Ignoring "noauto" option for root device
	
	
	==> kernel <==
	 15:37:24 up 3 min,  0 users,  load average: 0.04, 0.11, 0.06
	Linux NoKubernetes-273000 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.748535    2083 topology_manager.go:215] "Topology Admit Handler" podUID="38905a991faed697c79d359036912659" podNamespace="kube-system" podName="etcd-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.748560    2083 topology_manager.go:215] "Topology Admit Handler" podUID="b1cf1983122fefe442619a5392214cd5" podNamespace="kube-system" podName="kube-apiserver-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813194    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/38905a991faed697c79d359036912659-etcd-data\") pod \"etcd-nokubernetes-273000\" (UID: \"38905a991faed697c79d359036912659\") " pod="kube-system/etcd-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813291    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1cf1983122fefe442619a5392214cd5-ca-certs\") pod \"kube-apiserver-nokubernetes-273000\" (UID: \"b1cf1983122fefe442619a5392214cd5\") " pod="kube-system/kube-apiserver-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813324    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1cf1983122fefe442619a5392214cd5-usr-share-ca-certificates\") pod \"kube-apiserver-nokubernetes-273000\" (UID: \"b1cf1983122fefe442619a5392214cd5\") " pod="kube-system/kube-apiserver-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813352    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e20c5ec4911b4548ca49d7a672bc595-ca-certs\") pod \"kube-controller-manager-nokubernetes-273000\" (UID: \"3e20c5ec4911b4548ca49d7a672bc595\") " pod="kube-system/kube-controller-manager-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813378    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e20c5ec4911b4548ca49d7a672bc595-kubeconfig\") pod \"kube-controller-manager-nokubernetes-273000\" (UID: \"3e20c5ec4911b4548ca49d7a672bc595\") " pod="kube-system/kube-controller-manager-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813428    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e20c5ec4911b4548ca49d7a672bc595-usr-share-ca-certificates\") pod \"kube-controller-manager-nokubernetes-273000\" (UID: \"3e20c5ec4911b4548ca49d7a672bc595\") " pod="kube-system/kube-controller-manager-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813455    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be2735a5b7e8fc4b1ae22e9b18314521-kubeconfig\") pod \"kube-scheduler-nokubernetes-273000\" (UID: \"be2735a5b7e8fc4b1ae22e9b18314521\") " pod="kube-system/kube-scheduler-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813477    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/38905a991faed697c79d359036912659-etcd-certs\") pod \"etcd-nokubernetes-273000\" (UID: \"38905a991faed697c79d359036912659\") " pod="kube-system/etcd-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813507    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1cf1983122fefe442619a5392214cd5-k8s-certs\") pod \"kube-apiserver-nokubernetes-273000\" (UID: \"b1cf1983122fefe442619a5392214cd5\") " pod="kube-system/kube-apiserver-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813542    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3e20c5ec4911b4548ca49d7a672bc595-flexvolume-dir\") pod \"kube-controller-manager-nokubernetes-273000\" (UID: \"3e20c5ec4911b4548ca49d7a672bc595\") " pod="kube-system/kube-controller-manager-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813574    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e20c5ec4911b4548ca49d7a672bc595-k8s-certs\") pod \"kube-controller-manager-nokubernetes-273000\" (UID: \"3e20c5ec4911b4548ca49d7a672bc595\") " pod="kube-system/kube-controller-manager-nokubernetes-273000"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: I0719 15:34:18.595897    2083 apiserver.go:52] "Watching apiserver"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: I0719 15:34:18.611181    2083 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: E0719 15:34:18.705938    2083 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-nokubernetes-273000\" already exists" pod="kube-system/kube-apiserver-nokubernetes-273000"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: E0719 15:34:18.707679    2083 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"etcd-nokubernetes-273000\" already exists" pod="kube-system/etcd-nokubernetes-273000"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: I0719 15:34:18.724519    2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-nokubernetes-273000" podStartSLOduration=1.7245054579999999 podStartE2EDuration="1.724505458s" podCreationTimestamp="2024-07-19 15:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-19 15:34:18.718312154 +0000 UTC m=+1.210407163" watchObservedRunningTime="2024-07-19 15:34:18.724505458 +0000 UTC m=+1.216600467"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: I0719 15:34:18.738073    2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-nokubernetes-273000" podStartSLOduration=1.738061965 podStartE2EDuration="1.738061965s" podCreationTimestamp="2024-07-19 15:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-19 15:34:18.725024053 +0000 UTC m=+1.217119069" watchObservedRunningTime="2024-07-19 15:34:18.738061965 +0000 UTC m=+1.230156969"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: I0719 15:34:18.748033    2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-nokubernetes-273000" podStartSLOduration=1.7480206539999998 podStartE2EDuration="1.748020654s" podCreationTimestamp="2024-07-19 15:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-19 15:34:18.73831162 +0000 UTC m=+1.230406629" watchObservedRunningTime="2024-07-19 15:34:18.748020654 +0000 UTC m=+1.240115656"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: I0719 15:34:18.748193    2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-nokubernetes-273000" podStartSLOduration=1.7481878530000001 podStartE2EDuration="1.748187853s" podCreationTimestamp="2024-07-19 15:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-19 15:34:18.747976117 +0000 UTC m=+1.240071120" watchObservedRunningTime="2024-07-19 15:34:18.748187853 +0000 UTC m=+1.240282856"
	Jul 19 15:34:20 NoKubernetes-273000 kubelet[2083]: I0719 15:34:20.280047    2083 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Jul 19 15:34:21 NoKubernetes-273000 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jul 19 15:34:21 NoKubernetes-273000 systemd[1]: kubelet.service: Deactivated successfully.
	Jul 19 15:34:21 NoKubernetes-273000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.

-- /stdout --
** stderr ** 
	E0719 08:36:24.184152    6462 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 08:36:24.196340    6462 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 08:36:24.207638    6462 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 08:36:24.219339    6462 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 08:36:24.232205    6462 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 08:36:24.245193    6462 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 08:36:24.257863    6462 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 08:36:24.269003    6462 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p NoKubernetes-273000 -n NoKubernetes-273000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p NoKubernetes-273000 -n NoKubernetes-273000: exit status 2 (155.290843ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "NoKubernetes-273000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (185.57s)

TestNoKubernetes/serial/Start (180.35s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-273000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-273000 --no-kubernetes --driver=hyperkit : signal: killed (1m12.292028454s)

-- stdout --
	* [NoKubernetes-273000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-273000
	* Updating the running hyperkit "NoKubernetes-273000" VM ...

-- /stdout --
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-273000 --no-kubernetes --driver=hyperkit " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-273000 -n NoKubernetes-273000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-273000 -n NoKubernetes-273000: exit status 2 (152.986443ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestNoKubernetes/serial/Start FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestNoKubernetes/serial/Start]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-273000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p NoKubernetes-273000 logs -n 25: (1m47.722172189s)
helpers_test.go:252: TestNoKubernetes/serial/Start logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-248000 sudo systemctl                        | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | status kubelet --all --full                          |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo systemctl                        | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | cat kubelet --no-pager                               |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo journalctl                       | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | -xeu kubelet --all --full                            |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo cat                              | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo cat                              | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo systemctl                        | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | status docker --all --full                           |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo systemctl                        | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | cat docker --no-pager                                |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo cat                              | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo docker                           | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo systemctl                        | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | status cri-docker --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo systemctl                        | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | cat cri-docker --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo cat                              | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo cat                              | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo                                  | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo systemctl                        | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT |                     |
	|         | status containerd --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo systemctl                        | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | cat containerd --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo cat                              | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo cat                              | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo containerd                       | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | config dump                                          |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo systemctl                        | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT |                     |
	|         | status crio --all --full                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo systemctl                        | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | cat crio --no-pager                                  |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo find                             | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p auto-248000 sudo crio                             | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p auto-248000                                       | auto-248000    | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT | 19 Jul 24 08:37 PDT |
	| start   | -p kindnet-248000                                    | kindnet-248000 | jenkins | v1.33.1 | 19 Jul 24 08:37 PDT |                     |
	|         | --memory=3072                                        |                |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=kindnet                                        |                |         |         |                     |                     |
	|         | --driver=hyperkit                                    |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 08:37:54
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 08:37:54.501294    6722 out.go:291] Setting OutFile to fd 1 ...
	I0719 08:37:54.538244    6722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 08:37:54.538309    6722 out.go:304] Setting ErrFile to fd 2...
	I0719 08:37:54.538355    6722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 08:37:54.538596    6722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
	I0719 08:37:54.540565    6722 out.go:298] Setting JSON to false
	I0719 08:37:54.564802    6722 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5845,"bootTime":1721397629,"procs":564,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0719 08:37:54.564898    6722 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 08:37:54.625405    6722 out.go:177] * [kindnet-248000] minikube v1.33.1 on Darwin 14.5
	I0719 08:37:54.669787    6722 notify.go:220] Checking for updates...
	I0719 08:37:54.694780    6722 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 08:37:54.715679    6722 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
	I0719 08:37:54.737541    6722 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0719 08:37:54.767822    6722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 08:37:54.789688    6722 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube
	I0719 08:37:54.810747    6722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 08:37:54.834037    6722 config.go:182] Loaded profile config "NoKubernetes-273000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0719 08:37:54.834131    6722 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 08:37:54.862533    6722 out.go:177] * Using the hyperkit driver based on user configuration
	I0719 08:37:54.904732    6722 start.go:297] selected driver: hyperkit
	I0719 08:37:54.904757    6722 start.go:901] validating driver "hyperkit" against <nil>
	I0719 08:37:54.904778    6722 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 08:37:54.909244    6722 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 08:37:54.909356    6722 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19302-1032/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0719 08:37:54.917579    6722 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0719 08:37:54.921349    6722 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 08:37:54.921369    6722 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0719 08:37:54.921396    6722 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 08:37:54.921591    6722 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 08:37:54.921616    6722 cni.go:84] Creating CNI manager for "kindnet"
	I0719 08:37:54.921621    6722 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 08:37:54.921694    6722 start.go:340] cluster config:
	{Name:kindnet-248000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-248000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 08:37:54.921771    6722 iso.go:125] acquiring lock: {Name:mkadb9ba7febb03c49d2e1dd7dfa4b91b2759763 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 08:37:54.963638    6722 out.go:177] * Starting "kindnet-248000" primary control-plane node in "kindnet-248000" cluster
	I0719 08:37:54.984636    6722 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 08:37:54.984731    6722 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 08:37:54.984758    6722 cache.go:56] Caching tarball of preloaded images
	I0719 08:37:54.984975    6722 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 08:37:54.984998    6722 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 08:37:54.985146    6722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/config.json ...
	I0719 08:37:54.985180    6722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/config.json: {Name:mk7c1f710e9b42cc0736695acae0a11cfc293ff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 08:37:54.985898    6722 start.go:360] acquireMachinesLock for kindnet-248000: {Name:mke9fa98f500419c1998c374f8c492543e051339 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 08:37:54.986027    6722 start.go:364] duration metric: took 104.036µs to acquireMachinesLock for "kindnet-248000"
	I0719 08:37:54.986067    6722 start.go:93] Provisioning new machine with config: &{Name:kindnet-248000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-248000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 08:37:54.986164    6722 start.go:125] createHost starting for "" (driver="hyperkit")
	I0719 08:37:55.007650    6722 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 08:37:55.007948    6722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 08:37:55.008008    6722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 08:37:55.017690    6722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55386
	I0719 08:37:55.018070    6722 main.go:141] libmachine: () Calling .GetVersion
	I0719 08:37:55.018491    6722 main.go:141] libmachine: Using API Version  1
	I0719 08:37:55.018502    6722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 08:37:55.018754    6722 main.go:141] libmachine: () Calling .GetMachineName
	I0719 08:37:55.018869    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetMachineName
	I0719 08:37:55.018973    6722 main.go:141] libmachine: (kindnet-248000) Calling .DriverName
	I0719 08:37:55.019098    6722 start.go:159] libmachine.API.Create for "kindnet-248000" (driver="hyperkit")
	I0719 08:37:55.019124    6722 client.go:168] LocalClient.Create starting
	I0719 08:37:55.019156    6722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/ca.pem
	I0719 08:37:55.019207    6722 main.go:141] libmachine: Decoding PEM data...
	I0719 08:37:55.019224    6722 main.go:141] libmachine: Parsing certificate...
	I0719 08:37:55.019280    6722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/cert.pem
	I0719 08:37:55.019318    6722 main.go:141] libmachine: Decoding PEM data...
	I0719 08:37:55.019329    6722 main.go:141] libmachine: Parsing certificate...
	I0719 08:37:55.019341    6722 main.go:141] libmachine: Running pre-create checks...
	I0719 08:37:55.019352    6722 main.go:141] libmachine: (kindnet-248000) Calling .PreCreateCheck
	I0719 08:37:55.019433    6722 main.go:141] libmachine: (kindnet-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:37:55.019585    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetConfigRaw
	I0719 08:37:55.029194    6722 main.go:141] libmachine: Creating machine...
	I0719 08:37:55.029218    6722 main.go:141] libmachine: (kindnet-248000) Calling .Create
	I0719 08:37:55.029446    6722 main.go:141] libmachine: (kindnet-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:37:55.029722    6722 main.go:141] libmachine: (kindnet-248000) DBG | I0719 08:37:55.029429    6732 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19302-1032/.minikube
	I0719 08:37:55.029824    6722 main.go:141] libmachine: (kindnet-248000) Downloading /Users/jenkins/minikube-integration/19302-1032/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1032/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 08:37:55.266503    6722 main.go:141] libmachine: (kindnet-248000) DBG | I0719 08:37:55.266373    6732 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/id_rsa...
	I0719 08:37:55.447820    6722 main.go:141] libmachine: (kindnet-248000) DBG | I0719 08:37:55.447722    6732 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/kindnet-248000.rawdisk...
	I0719 08:37:55.447840    6722 main.go:141] libmachine: (kindnet-248000) DBG | Writing magic tar header
	I0719 08:37:55.447849    6722 main.go:141] libmachine: (kindnet-248000) DBG | Writing SSH key tar header
	I0719 08:37:55.448755    6722 main.go:141] libmachine: (kindnet-248000) DBG | I0719 08:37:55.448609    6732 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000 ...
	I0719 08:37:55.807597    6722 main.go:141] libmachine: (kindnet-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:37:55.807615    6722 main.go:141] libmachine: (kindnet-248000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/hyperkit.pid
	I0719 08:37:55.807685    6722 main.go:141] libmachine: (kindnet-248000) DBG | Using UUID c57d7fba-2451-4607-a5d7-1605f4212dec
	I0719 08:37:55.832043    6722 main.go:141] libmachine: (kindnet-248000) DBG | Generated MAC 2a:94:c4:f4:86:33
	I0719 08:37:55.832059    6722 main.go:141] libmachine: (kindnet-248000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kindnet-248000
	I0719 08:37:55.832086    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:55 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c57d7fba-2451-4607-a5d7-1605f4212dec", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d01e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0719 08:37:55.832111    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:55 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c57d7fba-2451-4607-a5d7-1605f4212dec", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d01e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0719 08:37:55.832155    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:55 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/hyperkit.pid", "-c", "2", "-m", "3072M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "c57d7fba-2451-4607-a5d7-1605f4212dec", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/kindnet-248000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/tty,log=/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/bzimage,/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kindnet-248000"}
	I0719 08:37:55.832208    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:55 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/hyperkit.pid -c 2 -m 3072M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U c57d7fba-2451-4607-a5d7-1605f4212dec -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/kindnet-248000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/tty,log=/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/console-ring -f kexec,/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/bzimage,/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kindnet-248000"
	I0719 08:37:55.832226    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:55 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0719 08:37:55.834981    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:55 DEBUG: hyperkit: Pid is 6733
	I0719 08:37:55.835396    6722 main.go:141] libmachine: (kindnet-248000) DBG | Attempt 0
	I0719 08:37:55.835408    6722 main.go:141] libmachine: (kindnet-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:37:55.835477    6722 main.go:141] libmachine: (kindnet-248000) DBG | hyperkit pid from json: 6733
	I0719 08:37:55.836787    6722 main.go:141] libmachine: (kindnet-248000) DBG | Searching for 2a:94:c4:f4:86:33 in /var/db/dhcpd_leases ...
	I0719 08:37:55.836806    6722 main.go:141] libmachine: (kindnet-248000) DBG | Found 34 entries in /var/db/dhcpd_leases!
	I0719 08:37:55.836819    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:4a:e9:8:4c:c5:8e ID:1,4a:e9:8:4c:c5:8e Lease:0x669bd8f1}
	I0719 08:37:55.836833    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:6:55:c6:6b:97:51 ID:1,6:55:c6:6b:97:51 Lease:0x669bd8dd}
	I0719 08:37:55.836840    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:52:4a:b7:af:2c:39 ID:1,52:4a:b7:af:2c:39 Lease:0x669bd867}
	I0719 08:37:55.836848    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:f2:f6:fe:19:a0:b7 ID:1,f2:f6:fe:19:a0:b7 Lease:0x669bd83a}
	I0719 08:37:55.836855    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:e6:70:48:7f:f7:9b ID:1,e6:70:48:7f:f7:9b Lease:0x669bd827}
	I0719 08:37:55.836860    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:c6:8:dc:1:d4:4c ID:1,c6:8:dc:1:d4:4c Lease:0x669bd7a4}
	I0719 08:37:55.836875    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:8a:7a:10:7b:72:f3 ID:1,8a:7a:10:7b:72:f3 Lease:0x669a860b}
	I0719 08:37:55.836889    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:32:7d:86:2b:7c:2b ID:1,32:7d:86:2b:7c:2b Lease:0x669bd6ee}
	I0719 08:37:55.836915    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:ae:f3:22:ac:9c:47 ID:1,ae:f3:22:ac:9c:47 Lease:0x669a85e5}
	I0719 08:37:55.836927    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:fe:93:56:25:dd:60 ID:1,fe:93:56:25:dd:60 Lease:0x669bd6c3}
	I0719 08:37:55.836944    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:de:c8:f0:6b:e:8f ID:1,de:c8:f0:6b:e:8f Lease:0x669bd631}
	I0719 08:37:55.836959    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:d6:33:9a:ea:b7:73 ID:1,d6:33:9a:ea:b7:73 Lease:0x669bd618}
	I0719 08:37:55.836975    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:82:61:e6:c8:bb:f4 ID:1,82:61:e6:c8:bb:f4 Lease:0x669bd5a8}
	I0719 08:37:55.836985    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:f2:52:e8:51:e2:2b ID:1,f2:52:e8:51:e2:2b Lease:0x669bd4c4}
	I0719 08:37:55.836992    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:62:9d:d9:88:d9:f2 ID:1,62:9d:d9:88:d9:f2 Lease:0x669bd47e}
	I0719 08:37:55.837000    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:50:3b:49:a8:90 ID:1,9a:50:3b:49:a8:90 Lease:0x669a828d}
	I0719 08:37:55.837008    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6a:62:79:b6:6:7d ID:1,6a:62:79:b6:6:7d Lease:0x669a81c4}
	I0719 08:37:55.837015    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4a:31:18:5f:f:1b ID:1,4a:31:18:5f:f:1b Lease:0x669bd3aa}
	I0719 08:37:55.837023    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:32:57:cc:c6:b3:14 ID:1,32:57:cc:c6:b3:14 Lease:0x669bd35f}
	I0719 08:37:55.837030    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:47:25:d7:41:72 ID:1,1e:47:25:d7:41:72 Lease:0x669a8017}
	I0719 08:37:55.837037    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:a0:d5:e5:41:c4 ID:1,26:a0:d5:e5:41:c4 Lease:0x669bcfe2}
	I0719 08:37:55.837044    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fe:b5:ae:18:f2:49 ID:1,fe:b5:ae:18:f2:49 Lease:0x669bcfcc}
	I0719 08:37:55.837051    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:fa:a3:e8:cd:a:9e ID:1,fa:a3:e8:cd:a:9e Lease:0x669bcf9a}
	I0719 08:37:55.837061    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b6:c5:51:fb:fc:77 ID:1,b6:c5:51:fb:fc:77 Lease:0x669bcf71}
	I0719 08:37:55.837078    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:20:df:19:e2:af ID:1,1e:20:df:19:e2:af Lease:0x669bcf31}
	I0719 08:37:55.837085    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:89:55:de:b:6d ID:1,9a:89:55:de:b:6d Lease:0x669a7da7}
	I0719 08:37:55.837092    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ca:b9:33:f:ef:b3 ID:1,ca:b9:33:f:ef:b3 Lease:0x669bce2f}
	I0719 08:37:55.837100    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b2:8c:db:de:35:ab ID:1,b2:8c:db:de:35:ab Lease:0x669bce01}
	I0719 08:37:55.837113    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:82:1f:40:de:76:a7 ID:1,82:1f:40:de:76:a7 Lease:0x669a7c0b}
	I0719 08:37:55.837126    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:be:2e:a:f5:ed:81 ID:1,be:2e:a:f5:ed:81 Lease:0x669bcdd9}
	I0719 08:37:55.837133    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:86:37:0:7f:98 ID:1,aa:86:37:0:7f:98 Lease:0x669bcdaf}
	I0719 08:37:55.837141    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:ee:56:45:fd:4d ID:1,46:ee:56:45:fd:4d Lease:0x669bc9f5}
	I0719 08:37:55.837150    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:e2:f9:2a:f8:e3:44 ID:1,e2:f9:2a:f8:e3:44 Lease:0x669a77d3}
	I0719 08:37:55.837160    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:46:ff:34:e9:b3:a4 ID:1,46:ff:34:e9:b3:a4 Lease:0x669bc7bf}
	I0719 08:37:55.842343    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:55 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0719 08:37:55.850896    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:55 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0719 08:37:55.851701    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0719 08:37:55.851729    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0719 08:37:55.851742    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0719 08:37:55.851772    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0719 08:37:56.250928    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0719 08:37:56.250942    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0719 08:37:56.365687    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0719 08:37:56.365703    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0719 08:37:56.365714    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0719 08:37:56.365727    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0719 08:37:56.366634    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0719 08:37:56.366645    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:37:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0719 08:37:57.837810    6722 main.go:141] libmachine: (kindnet-248000) DBG | Attempt 1
	I0719 08:37:57.837826    6722 main.go:141] libmachine: (kindnet-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:37:57.837918    6722 main.go:141] libmachine: (kindnet-248000) DBG | hyperkit pid from json: 6733
	I0719 08:37:57.838815    6722 main.go:141] libmachine: (kindnet-248000) DBG | Searching for 2a:94:c4:f4:86:33 in /var/db/dhcpd_leases ...
	I0719 08:37:57.838901    6722 main.go:141] libmachine: (kindnet-248000) DBG | Found 34 entries in /var/db/dhcpd_leases!
	I0719 08:37:57.838911    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:4a:e9:8:4c:c5:8e ID:1,4a:e9:8:4c:c5:8e Lease:0x669bd8f1}
	I0719 08:37:57.838927    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:6:55:c6:6b:97:51 ID:1,6:55:c6:6b:97:51 Lease:0x669bd8dd}
	I0719 08:37:57.838939    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:52:4a:b7:af:2c:39 ID:1,52:4a:b7:af:2c:39 Lease:0x669bd867}
	I0719 08:37:57.838952    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:f2:f6:fe:19:a0:b7 ID:1,f2:f6:fe:19:a0:b7 Lease:0x669bd83a}
	I0719 08:37:57.838963    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:e6:70:48:7f:f7:9b ID:1,e6:70:48:7f:f7:9b Lease:0x669bd827}
	I0719 08:37:57.838972    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:c6:8:dc:1:d4:4c ID:1,c6:8:dc:1:d4:4c Lease:0x669bd7a4}
	I0719 08:37:57.838990    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:8a:7a:10:7b:72:f3 ID:1,8a:7a:10:7b:72:f3 Lease:0x669a860b}
	I0719 08:37:57.838996    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:32:7d:86:2b:7c:2b ID:1,32:7d:86:2b:7c:2b Lease:0x669bd6ee}
	I0719 08:37:57.839002    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:ae:f3:22:ac:9c:47 ID:1,ae:f3:22:ac:9c:47 Lease:0x669a85e5}
	I0719 08:37:57.839011    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:fe:93:56:25:dd:60 ID:1,fe:93:56:25:dd:60 Lease:0x669bd6c3}
	I0719 08:37:57.839018    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:de:c8:f0:6b:e:8f ID:1,de:c8:f0:6b:e:8f Lease:0x669bd631}
	I0719 08:37:57.839024    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:d6:33:9a:ea:b7:73 ID:1,d6:33:9a:ea:b7:73 Lease:0x669bd618}
	I0719 08:37:57.839037    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:82:61:e6:c8:bb:f4 ID:1,82:61:e6:c8:bb:f4 Lease:0x669bd5a8}
	I0719 08:37:57.839052    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:f2:52:e8:51:e2:2b ID:1,f2:52:e8:51:e2:2b Lease:0x669bd4c4}
	I0719 08:37:57.839060    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:62:9d:d9:88:d9:f2 ID:1,62:9d:d9:88:d9:f2 Lease:0x669bd47e}
	I0719 08:37:57.839067    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:50:3b:49:a8:90 ID:1,9a:50:3b:49:a8:90 Lease:0x669a828d}
	I0719 08:37:57.839074    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6a:62:79:b6:6:7d ID:1,6a:62:79:b6:6:7d Lease:0x669a81c4}
	I0719 08:37:57.839082    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4a:31:18:5f:f:1b ID:1,4a:31:18:5f:f:1b Lease:0x669bd3aa}
	I0719 08:37:57.839089    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:32:57:cc:c6:b3:14 ID:1,32:57:cc:c6:b3:14 Lease:0x669bd35f}
	I0719 08:37:57.839095    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:47:25:d7:41:72 ID:1,1e:47:25:d7:41:72 Lease:0x669a8017}
	I0719 08:37:57.839102    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:a0:d5:e5:41:c4 ID:1,26:a0:d5:e5:41:c4 Lease:0x669bcfe2}
	I0719 08:37:57.839110    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fe:b5:ae:18:f2:49 ID:1,fe:b5:ae:18:f2:49 Lease:0x669bcfcc}
	I0719 08:37:57.839118    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:fa:a3:e8:cd:a:9e ID:1,fa:a3:e8:cd:a:9e Lease:0x669bcf9a}
	I0719 08:37:57.839128    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b6:c5:51:fb:fc:77 ID:1,b6:c5:51:fb:fc:77 Lease:0x669bcf71}
	I0719 08:37:57.839136    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:20:df:19:e2:af ID:1,1e:20:df:19:e2:af Lease:0x669bcf31}
	I0719 08:37:57.839143    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:89:55:de:b:6d ID:1,9a:89:55:de:b:6d Lease:0x669a7da7}
	I0719 08:37:57.839151    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ca:b9:33:f:ef:b3 ID:1,ca:b9:33:f:ef:b3 Lease:0x669bce2f}
	I0719 08:37:57.839165    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b2:8c:db:de:35:ab ID:1,b2:8c:db:de:35:ab Lease:0x669bce01}
	I0719 08:37:57.839173    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:82:1f:40:de:76:a7 ID:1,82:1f:40:de:76:a7 Lease:0x669a7c0b}
	I0719 08:37:57.839180    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:be:2e:a:f5:ed:81 ID:1,be:2e:a:f5:ed:81 Lease:0x669bcdd9}
	I0719 08:37:57.839187    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:86:37:0:7f:98 ID:1,aa:86:37:0:7f:98 Lease:0x669bcdaf}
	I0719 08:37:57.839193    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:ee:56:45:fd:4d ID:1,46:ee:56:45:fd:4d Lease:0x669bc9f5}
	I0719 08:37:57.839201    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:e2:f9:2a:f8:e3:44 ID:1,e2:f9:2a:f8:e3:44 Lease:0x669a77d3}
	I0719 08:37:57.839209    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:46:ff:34:e9:b3:a4 ID:1,46:ff:34:e9:b3:a4 Lease:0x669bc7bf}
	I0719 08:37:59.839401    6722 main.go:141] libmachine: (kindnet-248000) DBG | Attempt 2
	I0719 08:37:59.839430    6722 main.go:141] libmachine: (kindnet-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:37:59.839532    6722 main.go:141] libmachine: (kindnet-248000) DBG | hyperkit pid from json: 6733
	I0719 08:37:59.840483    6722 main.go:141] libmachine: (kindnet-248000) DBG | Searching for 2a:94:c4:f4:86:33 in /var/db/dhcpd_leases ...
	I0719 08:37:59.840550    6722 main.go:141] libmachine: (kindnet-248000) DBG | Found 34 entries in /var/db/dhcpd_leases!
	I0719 08:37:59.840563    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:4a:e9:8:4c:c5:8e ID:1,4a:e9:8:4c:c5:8e Lease:0x669bd8f1}
	I0719 08:37:59.840572    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:6:55:c6:6b:97:51 ID:1,6:55:c6:6b:97:51 Lease:0x669bd8dd}
	I0719 08:37:59.840580    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:52:4a:b7:af:2c:39 ID:1,52:4a:b7:af:2c:39 Lease:0x669bd867}
	I0719 08:37:59.840590    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:f2:f6:fe:19:a0:b7 ID:1,f2:f6:fe:19:a0:b7 Lease:0x669bd83a}
	I0719 08:37:59.840603    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:e6:70:48:7f:f7:9b ID:1,e6:70:48:7f:f7:9b Lease:0x669bd827}
	I0719 08:37:59.840610    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:c6:8:dc:1:d4:4c ID:1,c6:8:dc:1:d4:4c Lease:0x669bd7a4}
	I0719 08:37:59.840622    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:8a:7a:10:7b:72:f3 ID:1,8a:7a:10:7b:72:f3 Lease:0x669a860b}
	I0719 08:37:59.840631    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:32:7d:86:2b:7c:2b ID:1,32:7d:86:2b:7c:2b Lease:0x669bd6ee}
	I0719 08:37:59.840638    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:ae:f3:22:ac:9c:47 ID:1,ae:f3:22:ac:9c:47 Lease:0x669a85e5}
	I0719 08:37:59.840645    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:fe:93:56:25:dd:60 ID:1,fe:93:56:25:dd:60 Lease:0x669bd6c3}
	I0719 08:37:59.840674    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:de:c8:f0:6b:e:8f ID:1,de:c8:f0:6b:e:8f Lease:0x669bd631}
	I0719 08:37:59.840685    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:d6:33:9a:ea:b7:73 ID:1,d6:33:9a:ea:b7:73 Lease:0x669bd618}
	I0719 08:37:59.840691    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:82:61:e6:c8:bb:f4 ID:1,82:61:e6:c8:bb:f4 Lease:0x669bd5a8}
	I0719 08:37:59.840704    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:f2:52:e8:51:e2:2b ID:1,f2:52:e8:51:e2:2b Lease:0x669bd4c4}
	I0719 08:37:59.840716    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:62:9d:d9:88:d9:f2 ID:1,62:9d:d9:88:d9:f2 Lease:0x669bd47e}
	I0719 08:37:59.840747    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:50:3b:49:a8:90 ID:1,9a:50:3b:49:a8:90 Lease:0x669a828d}
	I0719 08:37:59.840782    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6a:62:79:b6:6:7d ID:1,6a:62:79:b6:6:7d Lease:0x669a81c4}
	I0719 08:37:59.840793    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4a:31:18:5f:f:1b ID:1,4a:31:18:5f:f:1b Lease:0x669bd3aa}
	I0719 08:37:59.840801    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:32:57:cc:c6:b3:14 ID:1,32:57:cc:c6:b3:14 Lease:0x669bd35f}
	I0719 08:37:59.840812    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:47:25:d7:41:72 ID:1,1e:47:25:d7:41:72 Lease:0x669a8017}
	I0719 08:37:59.840830    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:a0:d5:e5:41:c4 ID:1,26:a0:d5:e5:41:c4 Lease:0x669bcfe2}
	I0719 08:37:59.840838    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fe:b5:ae:18:f2:49 ID:1,fe:b5:ae:18:f2:49 Lease:0x669bcfcc}
	I0719 08:37:59.840847    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:fa:a3:e8:cd:a:9e ID:1,fa:a3:e8:cd:a:9e Lease:0x669bcf9a}
	I0719 08:37:59.840857    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b6:c5:51:fb:fc:77 ID:1,b6:c5:51:fb:fc:77 Lease:0x669bcf71}
	I0719 08:37:59.840865    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:20:df:19:e2:af ID:1,1e:20:df:19:e2:af Lease:0x669bcf31}
	I0719 08:37:59.840872    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:89:55:de:b:6d ID:1,9a:89:55:de:b:6d Lease:0x669a7da7}
	I0719 08:37:59.840879    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ca:b9:33:f:ef:b3 ID:1,ca:b9:33:f:ef:b3 Lease:0x669bce2f}
	I0719 08:37:59.840892    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b2:8c:db:de:35:ab ID:1,b2:8c:db:de:35:ab Lease:0x669bce01}
	I0719 08:37:59.840900    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:82:1f:40:de:76:a7 ID:1,82:1f:40:de:76:a7 Lease:0x669a7c0b}
	I0719 08:37:59.840906    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:be:2e:a:f5:ed:81 ID:1,be:2e:a:f5:ed:81 Lease:0x669bcdd9}
	I0719 08:37:59.840919    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:86:37:0:7f:98 ID:1,aa:86:37:0:7f:98 Lease:0x669bcdaf}
	I0719 08:37:59.840929    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:ee:56:45:fd:4d ID:1,46:ee:56:45:fd:4d Lease:0x669bc9f5}
	I0719 08:37:59.840937    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:e2:f9:2a:f8:e3:44 ID:1,e2:f9:2a:f8:e3:44 Lease:0x669a77d3}
	I0719 08:37:59.840945    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:46:ff:34:e9:b3:a4 ID:1,46:ff:34:e9:b3:a4 Lease:0x669bc7bf}
	I0719 08:38:01.629921    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:38:01 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0719 08:38:01.629949    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:38:01 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0719 08:38:01.629959    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:38:01 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0719 08:38:01.653843    6722 main.go:141] libmachine: (kindnet-248000) DBG | 2024/07/19 08:38:01 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0719 08:38:01.841797    6722 main.go:141] libmachine: (kindnet-248000) DBG | Attempt 3
	I0719 08:38:01.841831    6722 main.go:141] libmachine: (kindnet-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:38:01.841946    6722 main.go:141] libmachine: (kindnet-248000) DBG | hyperkit pid from json: 6733
	I0719 08:38:01.843528    6722 main.go:141] libmachine: (kindnet-248000) DBG | Searching for 2a:94:c4:f4:86:33 in /var/db/dhcpd_leases ...
	I0719 08:38:01.843661    6722 main.go:141] libmachine: (kindnet-248000) DBG | Found 34 entries in /var/db/dhcpd_leases!
	I0719 08:38:01.843687    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:4a:e9:8:4c:c5:8e ID:1,4a:e9:8:4c:c5:8e Lease:0x669bd8f1}
	I0719 08:38:01.843726    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:6:55:c6:6b:97:51 ID:1,6:55:c6:6b:97:51 Lease:0x669bd8dd}
	I0719 08:38:01.843736    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:52:4a:b7:af:2c:39 ID:1,52:4a:b7:af:2c:39 Lease:0x669bd867}
	I0719 08:38:01.843760    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:f2:f6:fe:19:a0:b7 ID:1,f2:f6:fe:19:a0:b7 Lease:0x669bd83a}
	I0719 08:38:01.843782    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:e6:70:48:7f:f7:9b ID:1,e6:70:48:7f:f7:9b Lease:0x669bd827}
	I0719 08:38:01.843792    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:c6:8:dc:1:d4:4c ID:1,c6:8:dc:1:d4:4c Lease:0x669bd7a4}
	I0719 08:38:01.843801    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:8a:7a:10:7b:72:f3 ID:1,8a:7a:10:7b:72:f3 Lease:0x669a860b}
	I0719 08:38:01.843810    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:32:7d:86:2b:7c:2b ID:1,32:7d:86:2b:7c:2b Lease:0x669bd6ee}
	I0719 08:38:01.843823    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:ae:f3:22:ac:9c:47 ID:1,ae:f3:22:ac:9c:47 Lease:0x669a85e5}
	I0719 08:38:01.843843    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:fe:93:56:25:dd:60 ID:1,fe:93:56:25:dd:60 Lease:0x669bd6c3}
	I0719 08:38:01.843859    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:de:c8:f0:6b:e:8f ID:1,de:c8:f0:6b:e:8f Lease:0x669bd631}
	I0719 08:38:01.843871    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:d6:33:9a:ea:b7:73 ID:1,d6:33:9a:ea:b7:73 Lease:0x669bd618}
	I0719 08:38:01.843882    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:82:61:e6:c8:bb:f4 ID:1,82:61:e6:c8:bb:f4 Lease:0x669bd5a8}
	I0719 08:38:01.843892    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:f2:52:e8:51:e2:2b ID:1,f2:52:e8:51:e2:2b Lease:0x669bd4c4}
	I0719 08:38:01.843901    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:62:9d:d9:88:d9:f2 ID:1,62:9d:d9:88:d9:f2 Lease:0x669bd47e}
	I0719 08:38:01.843922    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:50:3b:49:a8:90 ID:1,9a:50:3b:49:a8:90 Lease:0x669a828d}
	I0719 08:38:01.843933    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6a:62:79:b6:6:7d ID:1,6a:62:79:b6:6:7d Lease:0x669a81c4}
	I0719 08:38:01.843948    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4a:31:18:5f:f:1b ID:1,4a:31:18:5f:f:1b Lease:0x669bd3aa}
	I0719 08:38:01.843963    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:32:57:cc:c6:b3:14 ID:1,32:57:cc:c6:b3:14 Lease:0x669bd35f}
	I0719 08:38:01.843985    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:47:25:d7:41:72 ID:1,1e:47:25:d7:41:72 Lease:0x669a8017}
	I0719 08:38:01.844000    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:a0:d5:e5:41:c4 ID:1,26:a0:d5:e5:41:c4 Lease:0x669bcfe2}
	I0719 08:38:01.844012    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fe:b5:ae:18:f2:49 ID:1,fe:b5:ae:18:f2:49 Lease:0x669bcfcc}
	I0719 08:38:01.844023    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:fa:a3:e8:cd:a:9e ID:1,fa:a3:e8:cd:a:9e Lease:0x669bcf9a}
	I0719 08:38:01.844045    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b6:c5:51:fb:fc:77 ID:1,b6:c5:51:fb:fc:77 Lease:0x669bcf71}
	I0719 08:38:01.844062    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:20:df:19:e2:af ID:1,1e:20:df:19:e2:af Lease:0x669bcf31}
	I0719 08:38:01.844072    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:89:55:de:b:6d ID:1,9a:89:55:de:b:6d Lease:0x669a7da7}
	I0719 08:38:01.844093    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ca:b9:33:f:ef:b3 ID:1,ca:b9:33:f:ef:b3 Lease:0x669bce2f}
	I0719 08:38:01.844102    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b2:8c:db:de:35:ab ID:1,b2:8c:db:de:35:ab Lease:0x669bce01}
	I0719 08:38:01.844116    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:82:1f:40:de:76:a7 ID:1,82:1f:40:de:76:a7 Lease:0x669a7c0b}
	I0719 08:38:01.844126    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:be:2e:a:f5:ed:81 ID:1,be:2e:a:f5:ed:81 Lease:0x669bcdd9}
	I0719 08:38:01.844134    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:86:37:0:7f:98 ID:1,aa:86:37:0:7f:98 Lease:0x669bcdaf}
	I0719 08:38:01.844144    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:ee:56:45:fd:4d ID:1,46:ee:56:45:fd:4d Lease:0x669bc9f5}
	I0719 08:38:01.844154    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:e2:f9:2a:f8:e3:44 ID:1,e2:f9:2a:f8:e3:44 Lease:0x669a77d3}
	I0719 08:38:01.844165    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:46:ff:34:e9:b3:a4 ID:1,46:ff:34:e9:b3:a4 Lease:0x669bc7bf}
	I0719 08:38:03.844901    6722 main.go:141] libmachine: (kindnet-248000) DBG | Attempt 4
	I0719 08:38:03.844917    6722 main.go:141] libmachine: (kindnet-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:38:03.844955    6722 main.go:141] libmachine: (kindnet-248000) DBG | hyperkit pid from json: 6733
	I0719 08:38:03.845879    6722 main.go:141] libmachine: (kindnet-248000) DBG | Searching for 2a:94:c4:f4:86:33 in /var/db/dhcpd_leases ...
	I0719 08:38:03.845948    6722 main.go:141] libmachine: (kindnet-248000) DBG | Found 34 entries in /var/db/dhcpd_leases!
	I0719 08:38:03.845956    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:4a:e9:8:4c:c5:8e ID:1,4a:e9:8:4c:c5:8e Lease:0x669bd8f1}
	I0719 08:38:03.845967    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:6:55:c6:6b:97:51 ID:1,6:55:c6:6b:97:51 Lease:0x669bd8dd}
	I0719 08:38:03.845982    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:52:4a:b7:af:2c:39 ID:1,52:4a:b7:af:2c:39 Lease:0x669bd867}
	I0719 08:38:03.846001    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:f2:f6:fe:19:a0:b7 ID:1,f2:f6:fe:19:a0:b7 Lease:0x669bd83a}
	I0719 08:38:03.846015    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:e6:70:48:7f:f7:9b ID:1,e6:70:48:7f:f7:9b Lease:0x669bd827}
	I0719 08:38:03.846028    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:c6:8:dc:1:d4:4c ID:1,c6:8:dc:1:d4:4c Lease:0x669bd7a4}
	I0719 08:38:03.846041    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:8a:7a:10:7b:72:f3 ID:1,8a:7a:10:7b:72:f3 Lease:0x669a860b}
	I0719 08:38:03.846050    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:32:7d:86:2b:7c:2b ID:1,32:7d:86:2b:7c:2b Lease:0x669bd6ee}
	I0719 08:38:03.846057    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:ae:f3:22:ac:9c:47 ID:1,ae:f3:22:ac:9c:47 Lease:0x669a85e5}
	I0719 08:38:03.846076    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:fe:93:56:25:dd:60 ID:1,fe:93:56:25:dd:60 Lease:0x669bd6c3}
	I0719 08:38:03.846086    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:de:c8:f0:6b:e:8f ID:1,de:c8:f0:6b:e:8f Lease:0x669bd631}
	I0719 08:38:03.846094    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:d6:33:9a:ea:b7:73 ID:1,d6:33:9a:ea:b7:73 Lease:0x669bd618}
	I0719 08:38:03.846101    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:82:61:e6:c8:bb:f4 ID:1,82:61:e6:c8:bb:f4 Lease:0x669bd5a8}
	I0719 08:38:03.846108    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:f2:52:e8:51:e2:2b ID:1,f2:52:e8:51:e2:2b Lease:0x669bd4c4}
	I0719 08:38:03.846115    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:62:9d:d9:88:d9:f2 ID:1,62:9d:d9:88:d9:f2 Lease:0x669bd47e}
	I0719 08:38:03.846121    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:50:3b:49:a8:90 ID:1,9a:50:3b:49:a8:90 Lease:0x669a828d}
	I0719 08:38:03.846129    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6a:62:79:b6:6:7d ID:1,6a:62:79:b6:6:7d Lease:0x669a81c4}
	I0719 08:38:03.846135    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4a:31:18:5f:f:1b ID:1,4a:31:18:5f:f:1b Lease:0x669bd3aa}
	I0719 08:38:03.846153    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:32:57:cc:c6:b3:14 ID:1,32:57:cc:c6:b3:14 Lease:0x669bd35f}
	I0719 08:38:03.846161    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:47:25:d7:41:72 ID:1,1e:47:25:d7:41:72 Lease:0x669a8017}
	I0719 08:38:03.846167    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:26:a0:d5:e5:41:c4 ID:1,26:a0:d5:e5:41:c4 Lease:0x669bcfe2}
	I0719 08:38:03.846174    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:fe:b5:ae:18:f2:49 ID:1,fe:b5:ae:18:f2:49 Lease:0x669bcfcc}
	I0719 08:38:03.846181    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:fa:a3:e8:cd:a:9e ID:1,fa:a3:e8:cd:a:9e Lease:0x669bcf9a}
	I0719 08:38:03.846194    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b6:c5:51:fb:fc:77 ID:1,b6:c5:51:fb:fc:77 Lease:0x669bcf71}
	I0719 08:38:03.846202    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:1e:20:df:19:e2:af ID:1,1e:20:df:19:e2:af Lease:0x669bcf31}
	I0719 08:38:03.846209    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:89:55:de:b:6d ID:1,9a:89:55:de:b:6d Lease:0x669a7da7}
	I0719 08:38:03.846216    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ca:b9:33:f:ef:b3 ID:1,ca:b9:33:f:ef:b3 Lease:0x669bce2f}
	I0719 08:38:03.846235    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:b2:8c:db:de:35:ab ID:1,b2:8c:db:de:35:ab Lease:0x669bce01}
	I0719 08:38:03.846249    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:82:1f:40:de:76:a7 ID:1,82:1f:40:de:76:a7 Lease:0x669a7c0b}
	I0719 08:38:03.846264    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:be:2e:a:f5:ed:81 ID:1,be:2e:a:f5:ed:81 Lease:0x669bcdd9}
	I0719 08:38:03.846273    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:86:37:0:7f:98 ID:1,aa:86:37:0:7f:98 Lease:0x669bcdaf}
	I0719 08:38:03.846280    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:ee:56:45:fd:4d ID:1,46:ee:56:45:fd:4d Lease:0x669bc9f5}
	I0719 08:38:03.846286    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:e2:f9:2a:f8:e3:44 ID:1,e2:f9:2a:f8:e3:44 Lease:0x669a77d3}
	I0719 08:38:03.846304    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:46:ff:34:e9:b3:a4 ID:1,46:ff:34:e9:b3:a4 Lease:0x669bc7bf}
	I0719 08:38:05.847205    6722 main.go:141] libmachine: (kindnet-248000) DBG | Attempt 5
	I0719 08:38:05.847238    6722 main.go:141] libmachine: (kindnet-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:38:05.847443    6722 main.go:141] libmachine: (kindnet-248000) DBG | hyperkit pid from json: 6733
	I0719 08:38:05.849005    6722 main.go:141] libmachine: (kindnet-248000) DBG | Searching for 2a:94:c4:f4:86:33 in /var/db/dhcpd_leases ...
	I0719 08:38:05.849189    6722 main.go:141] libmachine: (kindnet-248000) DBG | Found 35 entries in /var/db/dhcpd_leases!
	I0719 08:38:05.849204    6722 main.go:141] libmachine: (kindnet-248000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:2a:94:c4:f4:86:33 ID:1,2a:94:c4:f4:86:33 Lease:0x669bd9dc}
	I0719 08:38:05.849212    6722 main.go:141] libmachine: (kindnet-248000) DBG | Found match: 2a:94:c4:f4:86:33
	I0719 08:38:05.849218    6722 main.go:141] libmachine: (kindnet-248000) DBG | IP: 192.169.0.36
	I0719 08:38:05.849261    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetConfigRaw
	I0719 08:38:05.850059    6722 main.go:141] libmachine: (kindnet-248000) Calling .DriverName
	I0719 08:38:05.850197    6722 main.go:141] libmachine: (kindnet-248000) Calling .DriverName
	I0719 08:38:05.850312    6722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 08:38:05.850322    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetState
	I0719 08:38:05.850437    6722 main.go:141] libmachine: (kindnet-248000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:38:05.850512    6722 main.go:141] libmachine: (kindnet-248000) DBG | hyperkit pid from json: 6733
	I0719 08:38:05.851654    6722 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 08:38:05.851688    6722 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 08:38:05.851696    6722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 08:38:05.851701    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHHostname
	I0719 08:38:05.851789    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHPort
	I0719 08:38:05.851892    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:05.851996    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:05.852102    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHUsername
	I0719 08:38:05.852221    6722 main.go:141] libmachine: Using SSH client type: native
	I0719 08:38:05.852409    6722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111520c0] 0x11154e20 <nil>  [] 0s} 192.169.0.36 22 <nil> <nil>}
	I0719 08:38:05.852417    6722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 08:38:05.872032    6722 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0719 08:38:08.936832    6722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 08:38:08.936845    6722 main.go:141] libmachine: Detecting the provisioner...
	I0719 08:38:08.936851    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHHostname
	I0719 08:38:08.936989    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHPort
	I0719 08:38:08.937091    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:08.937194    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:08.937296    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHUsername
	I0719 08:38:08.937432    6722 main.go:141] libmachine: Using SSH client type: native
	I0719 08:38:08.937570    6722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111520c0] 0x11154e20 <nil>  [] 0s} 192.169.0.36 22 <nil> <nil>}
	I0719 08:38:08.937577    6722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 08:38:08.998522    6722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 08:38:08.998591    6722 main.go:141] libmachine: found compatible host: buildroot
	I0719 08:38:08.998597    6722 main.go:141] libmachine: Provisioning with buildroot...
	I0719 08:38:08.998602    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetMachineName
	I0719 08:38:08.998732    6722 buildroot.go:166] provisioning hostname "kindnet-248000"
	I0719 08:38:08.998743    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetMachineName
	I0719 08:38:08.998835    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHHostname
	I0719 08:38:08.998919    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHPort
	I0719 08:38:08.999031    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:08.999130    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:08.999247    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHUsername
	I0719 08:38:08.999394    6722 main.go:141] libmachine: Using SSH client type: native
	I0719 08:38:08.999539    6722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111520c0] 0x11154e20 <nil>  [] 0s} 192.169.0.36 22 <nil> <nil>}
	I0719 08:38:08.999548    6722 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-248000 && echo "kindnet-248000" | sudo tee /etc/hostname
	I0719 08:38:09.072074    6722 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-248000
	
	I0719 08:38:09.072094    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHHostname
	I0719 08:38:09.072223    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHPort
	I0719 08:38:09.072317    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:09.072407    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:09.072494    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHUsername
	I0719 08:38:09.072613    6722 main.go:141] libmachine: Using SSH client type: native
	I0719 08:38:09.072761    6722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111520c0] 0x11154e20 <nil>  [] 0s} 192.169.0.36 22 <nil> <nil>}
	I0719 08:38:09.072772    6722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-248000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-248000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-248000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 08:38:09.141543    6722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 08:38:09.141561    6722 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1032/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1032/.minikube}
	I0719 08:38:09.141574    6722 buildroot.go:174] setting up certificates
	I0719 08:38:09.141584    6722 provision.go:84] configureAuth start
	I0719 08:38:09.141593    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetMachineName
	I0719 08:38:09.141740    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetIP
	I0719 08:38:09.141838    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHHostname
	I0719 08:38:09.141936    6722 provision.go:143] copyHostCerts
	I0719 08:38:09.142030    6722 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1032/.minikube/key.pem, removing ...
	I0719 08:38:09.142040    6722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1032/.minikube/key.pem
	I0719 08:38:09.142190    6722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1032/.minikube/key.pem (1679 bytes)
	I0719 08:38:09.142430    6722 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1032/.minikube/ca.pem, removing ...
	I0719 08:38:09.142436    6722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1032/.minikube/ca.pem
	I0719 08:38:09.142517    6722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1032/.minikube/ca.pem (1082 bytes)
	I0719 08:38:09.142697    6722 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1032/.minikube/cert.pem, removing ...
	I0719 08:38:09.142709    6722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1032/.minikube/cert.pem
	I0719 08:38:09.142795    6722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1032/.minikube/cert.pem (1123 bytes)
	I0719 08:38:09.142942    6722 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1032/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1032/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1032/.minikube/certs/ca-key.pem org=jenkins.kindnet-248000 san=[127.0.0.1 192.169.0.36 kindnet-248000 localhost minikube]
	I0719 08:38:09.349125    6722 provision.go:177] copyRemoteCerts
	I0719 08:38:09.349177    6722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 08:38:09.349196    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHHostname
	I0719 08:38:09.349338    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHPort
	I0719 08:38:09.349457    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:09.349565    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHUsername
	I0719 08:38:09.349664    6722 sshutil.go:53] new ssh client: &{IP:192.169.0.36 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/id_rsa Username:docker}
	I0719 08:38:09.388172    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 08:38:09.407128    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0719 08:38:09.426073    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 08:38:09.445007    6722 provision.go:87] duration metric: took 303.41443ms to configureAuth
	I0719 08:38:09.445021    6722 buildroot.go:189] setting minikube options for container-runtime
	I0719 08:38:09.445148    6722 config.go:182] Loaded profile config "kindnet-248000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 08:38:09.445161    6722 main.go:141] libmachine: (kindnet-248000) Calling .DriverName
	I0719 08:38:09.445290    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHHostname
	I0719 08:38:09.445396    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHPort
	I0719 08:38:09.445510    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:09.445603    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:09.445679    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHUsername
	I0719 08:38:09.445787    6722 main.go:141] libmachine: Using SSH client type: native
	I0719 08:38:09.445913    6722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111520c0] 0x11154e20 <nil>  [] 0s} 192.169.0.36 22 <nil> <nil>}
	I0719 08:38:09.445922    6722 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 08:38:09.508873    6722 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 08:38:09.508892    6722 buildroot.go:70] root file system type: tmpfs
	I0719 08:38:09.508978    6722 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 08:38:09.508992    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHHostname
	I0719 08:38:09.509119    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHPort
	I0719 08:38:09.509206    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:09.509301    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:09.509392    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHUsername
	I0719 08:38:09.509538    6722 main.go:141] libmachine: Using SSH client type: native
	I0719 08:38:09.509683    6722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111520c0] 0x11154e20 <nil>  [] 0s} 192.169.0.36 22 <nil> <nil>}
	I0719 08:38:09.509730    6722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 08:38:09.592041    6722 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 08:38:09.592061    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHHostname
	I0719 08:38:09.592199    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHPort
	I0719 08:38:09.592302    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:09.592378    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:09.592484    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHUsername
	I0719 08:38:09.592617    6722 main.go:141] libmachine: Using SSH client type: native
	I0719 08:38:09.592765    6722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111520c0] 0x11154e20 <nil>  [] 0s} 192.169.0.36 22 <nil> <nil>}
	I0719 08:38:09.592778    6722 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 08:38:11.180225    6722 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 08:38:11.180240    6722 main.go:141] libmachine: Checking connection to Docker...
	I0719 08:38:11.180247    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetURL
	I0719 08:38:11.180388    6722 main.go:141] libmachine: Docker is up and running!
	I0719 08:38:11.180397    6722 main.go:141] libmachine: Reticulating splines...
	I0719 08:38:11.180402    6722 client.go:171] duration metric: took 16.161502822s to LocalClient.Create
	I0719 08:38:11.180413    6722 start.go:167] duration metric: took 16.161547409s to libmachine.API.Create "kindnet-248000"
	I0719 08:38:11.180425    6722 start.go:293] postStartSetup for "kindnet-248000" (driver="hyperkit")
	I0719 08:38:11.180432    6722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 08:38:11.180442    6722 main.go:141] libmachine: (kindnet-248000) Calling .DriverName
	I0719 08:38:11.180591    6722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 08:38:11.180602    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHHostname
	I0719 08:38:11.180692    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHPort
	I0719 08:38:11.180784    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:11.180866    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHUsername
	I0719 08:38:11.180949    6722 sshutil.go:53] new ssh client: &{IP:192.169.0.36 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/id_rsa Username:docker}
	I0719 08:38:11.219329    6722 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 08:38:11.222560    6722 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 08:38:11.222575    6722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1032/.minikube/addons for local assets ...
	I0719 08:38:11.222688    6722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1032/.minikube/files for local assets ...
	I0719 08:38:11.222881    6722 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1032/.minikube/files/etc/ssl/certs/15602.pem -> 15602.pem in /etc/ssl/certs
	I0719 08:38:11.223090    6722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 08:38:11.230284    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/files/etc/ssl/certs/15602.pem --> /etc/ssl/certs/15602.pem (1708 bytes)
	I0719 08:38:11.250554    6722 start.go:296] duration metric: took 70.121656ms for postStartSetup
	I0719 08:38:11.250585    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetConfigRaw
	I0719 08:38:11.251226    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetIP
	I0719 08:38:11.251378    6722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/config.json ...
	I0719 08:38:11.251870    6722 start.go:128] duration metric: took 16.265922421s to createHost
	I0719 08:38:11.251887    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHHostname
	I0719 08:38:11.251987    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHPort
	I0719 08:38:11.252092    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:11.252193    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:11.252279    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHUsername
	I0719 08:38:11.252393    6722 main.go:141] libmachine: Using SSH client type: native
	I0719 08:38:11.252516    6722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111520c0] 0x11154e20 <nil>  [] 0s} 192.169.0.36 22 <nil> <nil>}
	I0719 08:38:11.252523    6722 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 08:38:11.316029    6722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721403491.461538103
	
	I0719 08:38:11.316041    6722 fix.go:216] guest clock: 1721403491.461538103
	I0719 08:38:11.316053    6722 fix.go:229] Guest: 2024-07-19 08:38:11.461538103 -0700 PDT Remote: 2024-07-19 08:38:11.251879 -0700 PDT m=+16.788438375 (delta=209.659103ms)
	I0719 08:38:11.316073    6722 fix.go:200] guest clock delta is within tolerance: 209.659103ms
	I0719 08:38:11.316078    6722 start.go:83] releasing machines lock for "kindnet-248000", held for 16.330272601s
	I0719 08:38:11.316096    6722 main.go:141] libmachine: (kindnet-248000) Calling .DriverName
	I0719 08:38:11.316225    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetIP
	I0719 08:38:11.316327    6722 main.go:141] libmachine: (kindnet-248000) Calling .DriverName
	I0719 08:38:11.316622    6722 main.go:141] libmachine: (kindnet-248000) Calling .DriverName
	I0719 08:38:11.316733    6722 main.go:141] libmachine: (kindnet-248000) Calling .DriverName
	I0719 08:38:11.316852    6722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 08:38:11.316881    6722 ssh_runner.go:195] Run: cat /version.json
	I0719 08:38:11.316884    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHHostname
	I0719 08:38:11.316891    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHHostname
	I0719 08:38:11.316985    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHPort
	I0719 08:38:11.316999    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHPort
	I0719 08:38:11.317077    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:11.317089    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHKeyPath
	I0719 08:38:11.317183    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHUsername
	I0719 08:38:11.317205    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetSSHUsername
	I0719 08:38:11.317273    6722 sshutil.go:53] new ssh client: &{IP:192.169.0.36 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/id_rsa Username:docker}
	I0719 08:38:11.317287    6722 sshutil.go:53] new ssh client: &{IP:192.169.0.36 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/kindnet-248000/id_rsa Username:docker}
	I0719 08:38:11.398587    6722 ssh_runner.go:195] Run: systemctl --version
	I0719 08:38:11.403704    6722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 08:38:11.408385    6722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 08:38:11.408450    6722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 08:38:11.421832    6722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 08:38:11.421845    6722 start.go:495] detecting cgroup driver to use...
	I0719 08:38:11.421948    6722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 08:38:11.436840    6722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 08:38:11.446311    6722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 08:38:11.455288    6722 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 08:38:11.455338    6722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 08:38:11.464924    6722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 08:38:11.473995    6722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 08:38:11.482764    6722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 08:38:11.491612    6722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 08:38:11.500658    6722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 08:38:11.509718    6722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 08:38:11.518640    6722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 08:38:11.528366    6722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 08:38:11.536428    6722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 08:38:11.544589    6722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 08:38:11.639172    6722 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 08:38:11.662098    6722 start.go:495] detecting cgroup driver to use...
	I0719 08:38:11.662176    6722 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 08:38:11.675127    6722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 08:38:11.686912    6722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 08:38:11.703769    6722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 08:38:11.717999    6722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 08:38:11.732386    6722 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 08:38:11.806590    6722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 08:38:11.818036    6722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 08:38:11.833087    6722 ssh_runner.go:195] Run: which cri-dockerd
	I0719 08:38:11.835953    6722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 08:38:11.843806    6722 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 08:38:11.857025    6722 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 08:38:11.954475    6722 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 08:38:12.058452    6722 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 08:38:12.058526    6722 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 08:38:12.073308    6722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 08:38:12.182306    6722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 08:38:14.511149    6722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.328857366s)
	I0719 08:38:14.511208    6722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 08:38:14.521889    6722 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0719 08:38:14.535189    6722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 08:38:14.545477    6722 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 08:38:14.640128    6722 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 08:38:14.737008    6722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 08:38:14.841344    6722 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 08:38:14.853645    6722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 08:38:14.864699    6722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 08:38:14.962030    6722 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 08:38:15.017842    6722 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 08:38:15.017923    6722 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 08:38:15.022325    6722 start.go:563] Will wait 60s for crictl version
	I0719 08:38:15.022370    6722 ssh_runner.go:195] Run: which crictl
	I0719 08:38:15.025252    6722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 08:38:15.054667    6722 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 08:38:15.054736    6722 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 08:38:15.072888    6722 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 08:38:15.116492    6722 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 08:38:15.116553    6722 main.go:141] libmachine: (kindnet-248000) Calling .GetIP
	I0719 08:38:15.116924    6722 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0719 08:38:15.121472    6722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 08:38:15.130952    6722 kubeadm.go:883] updating cluster {Name:kindnet-248000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.3 ClusterName:kindnet-248000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.169.0.36 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 08:38:15.131023    6722 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 08:38:15.131083    6722 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 08:38:15.145567    6722 docker.go:685] Got preloaded images: 
	I0719 08:38:15.145580    6722 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0719 08:38:15.145639    6722 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 08:38:15.153287    6722 ssh_runner.go:195] Run: which lz4
	I0719 08:38:15.156290    6722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 08:38:15.159365    6722 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 08:38:15.159380    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0719 08:38:16.243979    6722 docker.go:649] duration metric: took 1.087751995s to copy over tarball
	I0719 08:38:16.244042    6722 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 08:38:19.118093    6722 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.874075877s)
	I0719 08:38:19.118107    6722 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 08:38:19.144227    6722 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 08:38:19.151935    6722 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0719 08:38:19.165701    6722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 08:38:19.268166    6722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 08:38:21.731653    6722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.46349855s)
	I0719 08:38:21.731780    6722 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 08:38:21.745725    6722 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 08:38:21.745745    6722 cache_images.go:84] Images are preloaded, skipping loading
	I0719 08:38:21.745753    6722 kubeadm.go:934] updating node { 192.169.0.36 8443 v1.30.3 docker true true} ...
	I0719 08:38:21.745834    6722 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-248000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:kindnet-248000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0719 08:38:21.745901    6722 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 08:38:21.765947    6722 cni.go:84] Creating CNI manager for "kindnet"
	I0719 08:38:21.765967    6722 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 08:38:21.765983    6722 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.36 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-248000 NodeName:kindnet-248000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 08:38:21.766075    6722 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kindnet-248000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 08:38:21.766133    6722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 08:38:21.773653    6722 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 08:38:21.773699    6722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 08:38:21.781352    6722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 08:38:21.794626    6722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 08:38:21.807828    6722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0719 08:38:21.821419    6722 ssh_runner.go:195] Run: grep 192.169.0.36	control-plane.minikube.internal$ /etc/hosts
	I0719 08:38:21.824307    6722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 08:38:21.833974    6722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 08:38:21.937211    6722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 08:38:21.953536    6722 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000 for IP: 192.169.0.36
	I0719 08:38:21.953550    6722 certs.go:194] generating shared ca certs ...
	I0719 08:38:21.953560    6722 certs.go:226] acquiring lock for ca certs: {Name:mk53a80f7a907ce614169b0214a611bc7afa47b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 08:38:21.953746    6722 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1032/.minikube/ca.key
	I0719 08:38:21.953821    6722 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1032/.minikube/proxy-client-ca.key
	I0719 08:38:21.953831    6722 certs.go:256] generating profile certs ...
	I0719 08:38:21.953882    6722 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.key
	I0719 08:38:21.953897    6722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt with IP's: []
	I0719 08:38:22.066635    6722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt ...
	I0719 08:38:22.066653    6722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: {Name:mkf4d4e0ef33ea5702fbb2e4d52f81607b2fc643 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 08:38:22.066944    6722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.key ...
	I0719 08:38:22.066951    6722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.key: {Name:mkd76bd00fc31e626bdd9acfe1b0b611ef849bcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 08:38:22.067158    6722 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/apiserver.key.122b80aa
	I0719 08:38:22.067173    6722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/apiserver.crt.122b80aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.36]
	I0719 08:38:22.154792    6722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/apiserver.crt.122b80aa ...
	I0719 08:38:22.154807    6722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/apiserver.crt.122b80aa: {Name:mk7e3124068d5205903f1ad91e56385a9cd69f8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 08:38:22.155105    6722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/apiserver.key.122b80aa ...
	I0719 08:38:22.155115    6722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/apiserver.key.122b80aa: {Name:mk60911a021bfedd7c69dec3f6477e7977524514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 08:38:22.155350    6722 certs.go:381] copying /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/apiserver.crt.122b80aa -> /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/apiserver.crt
	I0719 08:38:22.155570    6722 certs.go:385] copying /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/apiserver.key.122b80aa -> /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/apiserver.key
	I0719 08:38:22.155745    6722 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/proxy-client.key
	I0719 08:38:22.155763    6722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/proxy-client.crt with IP's: []
	I0719 08:38:22.307267    6722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/proxy-client.crt ...
	I0719 08:38:22.307282    6722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/proxy-client.crt: {Name:mkc478478ad901d1f6d7359123b74135ae4bd96b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 08:38:22.307589    6722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/proxy-client.key ...
	I0719 08:38:22.307598    6722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/proxy-client.key: {Name:mk3c4b13173f2293eb879afd4a8f3d0d88b92b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 08:38:22.308024    6722 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/1560.pem (1338 bytes)
	W0719 08:38:22.308079    6722 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/1560_empty.pem, impossibly tiny 0 bytes
	I0719 08:38:22.308089    6722 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 08:38:22.308119    6722 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/ca.pem (1082 bytes)
	I0719 08:38:22.308148    6722 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/cert.pem (1123 bytes)
	I0719 08:38:22.308176    6722 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/key.pem (1679 bytes)
	I0719 08:38:22.308243    6722 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1032/.minikube/files/etc/ssl/certs/15602.pem (1708 bytes)
	I0719 08:38:22.308678    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 08:38:22.329257    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 08:38:22.349086    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 08:38:22.368741    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 08:38:22.388604    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0719 08:38:22.408210    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 08:38:22.428105    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 08:38:22.447744    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 08:38:22.466994    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/files/etc/ssl/certs/15602.pem --> /usr/share/ca-certificates/15602.pem (1708 bytes)
	I0719 08:38:22.487002    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 08:38:22.506839    6722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1032/.minikube/certs/1560.pem --> /usr/share/ca-certificates/1560.pem (1338 bytes)
	I0719 08:38:22.526292    6722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 08:38:22.539769    6722 ssh_runner.go:195] Run: openssl version
	I0719 08:38:22.544344    6722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 08:38:22.552717    6722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 08:38:22.556168    6722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:21 /usr/share/ca-certificates/minikubeCA.pem
	I0719 08:38:22.556204    6722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 08:38:22.560416    6722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 08:38:22.568598    6722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1560.pem && ln -fs /usr/share/ca-certificates/1560.pem /etc/ssl/certs/1560.pem"
	I0719 08:38:22.576887    6722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1560.pem
	I0719 08:38:22.580306    6722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:30 /usr/share/ca-certificates/1560.pem
	I0719 08:38:22.580340    6722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1560.pem
	I0719 08:38:22.584638    6722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1560.pem /etc/ssl/certs/51391683.0"
	I0719 08:38:22.592890    6722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15602.pem && ln -fs /usr/share/ca-certificates/15602.pem /etc/ssl/certs/15602.pem"
	I0719 08:38:22.604205    6722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15602.pem
	I0719 08:38:22.612158    6722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:30 /usr/share/ca-certificates/15602.pem
	I0719 08:38:22.612212    6722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15602.pem
	I0719 08:38:22.617963    6722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15602.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 08:38:22.629822    6722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 08:38:22.634662    6722 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 08:38:22.634707    6722 kubeadm.go:392] StartCluster: {Name:kindnet-248000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-248000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.169.0.36 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 08:38:22.634804    6722 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 08:38:22.649936    6722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 08:38:22.661802    6722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 08:38:22.670118    6722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 08:38:22.678221    6722 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 08:38:22.678230    6722 kubeadm.go:157] found existing configuration files:
	
	I0719 08:38:22.678266    6722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 08:38:22.685992    6722 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 08:38:22.686029    6722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 08:38:22.694204    6722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 08:38:22.701826    6722 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 08:38:22.701868    6722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 08:38:22.709905    6722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 08:38:22.718349    6722 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 08:38:22.718403    6722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 08:38:22.726426    6722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 08:38:22.734278    6722 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 08:38:22.734316    6722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 08:38:22.742339    6722 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 08:38:22.781161    6722 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 08:38:22.781204    6722 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 08:38:22.870191    6722 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 08:38:22.870279    6722 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 08:38:22.870349    6722 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 08:38:23.041920    6722 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 08:38:23.065533    6722 out.go:204]   - Generating certificates and keys ...
	I0719 08:38:23.065617    6722 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 08:38:23.065708    6722 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 08:38:23.305316    6722 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 08:38:23.470904    6722 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 08:38:24.140415    6722 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 08:38:24.429330    6722 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 08:38:24.556391    6722 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 08:38:24.556567    6722 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-248000 localhost] and IPs [192.169.0.36 127.0.0.1 ::1]
	I0719 08:38:24.698284    6722 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 08:38:24.698487    6722 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-248000 localhost] and IPs [192.169.0.36 127.0.0.1 ::1]
	I0719 08:38:24.904300    6722 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 08:38:25.006678    6722 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 08:38:25.237613    6722 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 08:38:25.237753    6722 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 08:38:25.452824    6722 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 08:38:25.667420    6722 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 08:38:25.751521    6722 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 08:38:26.326811    6722 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 08:38:26.525516    6722 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 08:38:26.525964    6722 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 08:38:26.527500    6722 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 08:38:26.548755    6722 out.go:204]   - Booting up control plane ...
	I0719 08:38:26.548823    6722 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 08:38:26.548890    6722 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 08:38:26.548949    6722 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 08:38:26.549024    6722 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 08:38:26.549095    6722 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 08:38:26.549131    6722 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 08:38:26.651324    6722 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 08:38:26.651405    6722 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 08:38:27.652088    6722 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00150999s
	I0719 08:38:27.652163    6722 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 08:38:31.651879    6722 kubeadm.go:310] [api-check] The API server is healthy after 4.002133625s
	I0719 08:38:31.661201    6722 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 08:38:31.667709    6722 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 08:38:31.682953    6722 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 08:38:31.683106    6722 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-248000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 08:38:31.688613    6722 kubeadm.go:310] [bootstrap-token] Using token: 77f478.npbvsxtbcggey5v4
	I0719 08:38:31.726143    6722 out.go:204]   - Configuring RBAC rules ...
	I0719 08:38:31.726316    6722 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 08:38:31.728738    6722 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 08:38:31.775302    6722 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 08:38:31.777186    6722 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 08:38:31.780445    6722 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 08:38:31.782811    6722 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 08:38:32.058067    6722 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 08:38:32.500101    6722 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 08:38:33.057652    6722 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 08:38:33.058141    6722 kubeadm.go:310] 
	I0719 08:38:33.058192    6722 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 08:38:33.058197    6722 kubeadm.go:310] 
	I0719 08:38:33.058271    6722 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 08:38:33.058281    6722 kubeadm.go:310] 
	I0719 08:38:33.058314    6722 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 08:38:33.058377    6722 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 08:38:33.058418    6722 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 08:38:33.058424    6722 kubeadm.go:310] 
	I0719 08:38:33.058478    6722 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 08:38:33.058488    6722 kubeadm.go:310] 
	I0719 08:38:33.058536    6722 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 08:38:33.058546    6722 kubeadm.go:310] 
	I0719 08:38:33.058590    6722 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 08:38:33.058648    6722 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 08:38:33.058713    6722 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 08:38:33.058719    6722 kubeadm.go:310] 
	I0719 08:38:33.058790    6722 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 08:38:33.058855    6722 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 08:38:33.058863    6722 kubeadm.go:310] 
	I0719 08:38:33.058925    6722 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 77f478.npbvsxtbcggey5v4 \
	I0719 08:38:33.059012    6722 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fe4e7b2321e4926396e07055ebba45385b01b2ebb7aad361c932ce23b3aa95f5 \
	I0719 08:38:33.059036    6722 kubeadm.go:310] 	--control-plane 
	I0719 08:38:33.059041    6722 kubeadm.go:310] 
	I0719 08:38:33.059107    6722 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 08:38:33.059111    6722 kubeadm.go:310] 
	I0719 08:38:33.059193    6722 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 77f478.npbvsxtbcggey5v4 \
	I0719 08:38:33.059266    6722 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fe4e7b2321e4926396e07055ebba45385b01b2ebb7aad361c932ce23b3aa95f5 
	I0719 08:38:33.059354    6722 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 08:38:33.059362    6722 cni.go:84] Creating CNI manager for "kindnet"
	I0719 08:38:33.079870    6722 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 08:38:33.136965    6722 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 08:38:33.142487    6722 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 08:38:33.142498    6722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 08:38:33.157569    6722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 08:38:33.354568    6722 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 08:38:33.354636    6722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 08:38:33.354635    6722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-248000 minikube.k8s.io/updated_at=2024_07_19T08_38_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=kindnet-248000 minikube.k8s.io/primary=true
	I0719 08:38:33.363675    6722 ops.go:34] apiserver oom_adj: -16
	I0719 08:38:33.466030    6722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 08:38:33.967004    6722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 08:38:34.466237    6722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> Docker <==
	Jul 19 15:38:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:38:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36'"
	Jul 19 15:38:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:38:24Z" level=error msg="error getting RW layer size for container ID 'f5e7c80426981ff3a38a2a9741022e2e76c72061c028d2201184be312783945a': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/f5e7c80426981ff3a38a2a9741022e2e76c72061c028d2201184be312783945a/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 15:38:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:38:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f5e7c80426981ff3a38a2a9741022e2e76c72061c028d2201184be312783945a'"
	Jul 19 15:38:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:38:24Z" level=error msg="error getting RW layer size for container ID 'd44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 15:38:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:38:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910'"
	Jul 19 15:38:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:38:24Z" level=error msg="error getting RW layer size for container ID '77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 15:38:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:38:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID '77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0'"
	Jul 19 15:38:24 NoKubernetes-273000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 15:38:24 NoKubernetes-273000 dockerd[3693]: time="2024-07-19T15:38:24.874386323Z" level=info msg="Starting up"
	Jul 19 15:39:24 NoKubernetes-273000 dockerd[3693]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 15:39:24 NoKubernetes-273000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 15:39:24 NoKubernetes-273000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 15:39:24 NoKubernetes-273000 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 15:39:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:39:24Z" level=error msg="error getting RW layer size for container ID '77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 15:39:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:39:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID '77abac4779721eafb6c6513b5ef0fed1dd7b11c15082d0221aa0600708e016f0'"
	Jul 19 15:39:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:39:24Z" level=error msg="error getting RW layer size for container ID 'db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 15:39:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:39:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'db10d0ec237653e86ae8d8f8dc78d72484959c2d7716cfec82187b9514469b36'"
	Jul 19 15:39:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:39:24Z" level=error msg="error getting RW layer size for container ID 'd44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 15:39:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:39:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd44c0ff04c297b29a6cee9f5ce2b6ff52f816d5cc04f322be992dcd338e5b910'"
	Jul 19 15:39:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:39:24Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 15:39:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:39:24Z" level=error msg="error getting RW layer size for container ID 'f5e7c80426981ff3a38a2a9741022e2e76c72061c028d2201184be312783945a': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/f5e7c80426981ff3a38a2a9741022e2e76c72061c028d2201184be312783945a/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 15:39:24 NoKubernetes-273000 cri-dockerd[1134]: time="2024-07-19T15:39:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f5e7c80426981ff3a38a2a9741022e2e76c72061c028d2201184be312783945a'"
	Jul 19 15:39:25 NoKubernetes-273000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jul 19 15:39:25 NoKubernetes-273000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 15:39:25 NoKubernetes-273000 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-19T15:39:25Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	sudo: /var/lib/minikube/binaries/v0.0.0/kubectl: command not found
	
	
	==> dmesg <==
	[  +0.292831] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[  +0.109789] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[  +0.107977] systemd-fstab-generator[869]: Ignoring "noauto" option for root device
	[  +2.603909] kauditd_printk_skb: 182 callbacks suppressed
	[  +0.283419] systemd-fstab-generator[1087]: Ignoring "noauto" option for root device
	[  +0.095956] systemd-fstab-generator[1099]: Ignoring "noauto" option for root device
	[  +0.103337] systemd-fstab-generator[1111]: Ignoring "noauto" option for root device
	[  +0.120185] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[Jul19 15:34] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.052077] kauditd_printk_skb: 85 callbacks suppressed
	[  +2.847565] systemd-fstab-generator[1472]: Ignoring "noauto" option for root device
	[  +4.904691] systemd-fstab-generator[1670]: Ignoring "noauto" option for root device
	[  +0.054164] kauditd_printk_skb: 70 callbacks suppressed
	[  +4.963072] systemd-fstab-generator[2075]: Ignoring "noauto" option for root device
	[  +0.081650] kauditd_printk_skb: 62 callbacks suppressed
	[  +1.188013] systemd-fstab-generator[2137]: Ignoring "noauto" option for root device
	[  +3.286630] systemd-fstab-generator[2537]: Ignoring "noauto" option for root device
	[  +0.230885] systemd-fstab-generator[2572]: Ignoring "noauto" option for root device
	[  +0.102784] systemd-fstab-generator[2584]: Ignoring "noauto" option for root device
	[  +0.108484] systemd-fstab-generator[2598]: Ignoring "noauto" option for root device
	[Jul19 15:37] systemd-fstab-generator[3464]: Ignoring "noauto" option for root device
	[  +0.049982] kauditd_printk_skb: 130 callbacks suppressed
	[  +0.171319] systemd-fstab-generator[3498]: Ignoring "noauto" option for root device
	[  +0.107646] systemd-fstab-generator[3510]: Ignoring "noauto" option for root device
	[  +0.125789] systemd-fstab-generator[3524]: Ignoring "noauto" option for root device
	
	
	==> kernel <==
	 15:40:25 up 6 min,  0 users,  load average: 0.08, 0.07, 0.04
	Linux NoKubernetes-273000 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.748535    2083 topology_manager.go:215] "Topology Admit Handler" podUID="38905a991faed697c79d359036912659" podNamespace="kube-system" podName="etcd-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.748560    2083 topology_manager.go:215] "Topology Admit Handler" podUID="b1cf1983122fefe442619a5392214cd5" podNamespace="kube-system" podName="kube-apiserver-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813194    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/38905a991faed697c79d359036912659-etcd-data\") pod \"etcd-nokubernetes-273000\" (UID: \"38905a991faed697c79d359036912659\") " pod="kube-system/etcd-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813291    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1cf1983122fefe442619a5392214cd5-ca-certs\") pod \"kube-apiserver-nokubernetes-273000\" (UID: \"b1cf1983122fefe442619a5392214cd5\") " pod="kube-system/kube-apiserver-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813324    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1cf1983122fefe442619a5392214cd5-usr-share-ca-certificates\") pod \"kube-apiserver-nokubernetes-273000\" (UID: \"b1cf1983122fefe442619a5392214cd5\") " pod="kube-system/kube-apiserver-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813352    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e20c5ec4911b4548ca49d7a672bc595-ca-certs\") pod \"kube-controller-manager-nokubernetes-273000\" (UID: \"3e20c5ec4911b4548ca49d7a672bc595\") " pod="kube-system/kube-controller-manager-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813378    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e20c5ec4911b4548ca49d7a672bc595-kubeconfig\") pod \"kube-controller-manager-nokubernetes-273000\" (UID: \"3e20c5ec4911b4548ca49d7a672bc595\") " pod="kube-system/kube-controller-manager-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813428    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e20c5ec4911b4548ca49d7a672bc595-usr-share-ca-certificates\") pod \"kube-controller-manager-nokubernetes-273000\" (UID: \"3e20c5ec4911b4548ca49d7a672bc595\") " pod="kube-system/kube-controller-manager-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813455    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be2735a5b7e8fc4b1ae22e9b18314521-kubeconfig\") pod \"kube-scheduler-nokubernetes-273000\" (UID: \"be2735a5b7e8fc4b1ae22e9b18314521\") " pod="kube-system/kube-scheduler-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813477    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/38905a991faed697c79d359036912659-etcd-certs\") pod \"etcd-nokubernetes-273000\" (UID: \"38905a991faed697c79d359036912659\") " pod="kube-system/etcd-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813507    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1cf1983122fefe442619a5392214cd5-k8s-certs\") pod \"kube-apiserver-nokubernetes-273000\" (UID: \"b1cf1983122fefe442619a5392214cd5\") " pod="kube-system/kube-apiserver-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813542    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3e20c5ec4911b4548ca49d7a672bc595-flexvolume-dir\") pod \"kube-controller-manager-nokubernetes-273000\" (UID: \"3e20c5ec4911b4548ca49d7a672bc595\") " pod="kube-system/kube-controller-manager-nokubernetes-273000"
	Jul 19 15:34:17 NoKubernetes-273000 kubelet[2083]: I0719 15:34:17.813574    2083 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e20c5ec4911b4548ca49d7a672bc595-k8s-certs\") pod \"kube-controller-manager-nokubernetes-273000\" (UID: \"3e20c5ec4911b4548ca49d7a672bc595\") " pod="kube-system/kube-controller-manager-nokubernetes-273000"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: I0719 15:34:18.595897    2083 apiserver.go:52] "Watching apiserver"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: I0719 15:34:18.611181    2083 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: E0719 15:34:18.705938    2083 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-nokubernetes-273000\" already exists" pod="kube-system/kube-apiserver-nokubernetes-273000"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: E0719 15:34:18.707679    2083 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"etcd-nokubernetes-273000\" already exists" pod="kube-system/etcd-nokubernetes-273000"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: I0719 15:34:18.724519    2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-nokubernetes-273000" podStartSLOduration=1.7245054579999999 podStartE2EDuration="1.724505458s" podCreationTimestamp="2024-07-19 15:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-19 15:34:18.718312154 +0000 UTC m=+1.210407163" watchObservedRunningTime="2024-07-19 15:34:18.724505458 +0000 UTC m=+1.216600467"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: I0719 15:34:18.738073    2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-nokubernetes-273000" podStartSLOduration=1.738061965 podStartE2EDuration="1.738061965s" podCreationTimestamp="2024-07-19 15:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-19 15:34:18.725024053 +0000 UTC m=+1.217119069" watchObservedRunningTime="2024-07-19 15:34:18.738061965 +0000 UTC m=+1.230156969"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: I0719 15:34:18.748033    2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-nokubernetes-273000" podStartSLOduration=1.7480206539999998 podStartE2EDuration="1.748020654s" podCreationTimestamp="2024-07-19 15:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-19 15:34:18.73831162 +0000 UTC m=+1.230406629" watchObservedRunningTime="2024-07-19 15:34:18.748020654 +0000 UTC m=+1.240115656"
	Jul 19 15:34:18 NoKubernetes-273000 kubelet[2083]: I0719 15:34:18.748193    2083 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-nokubernetes-273000" podStartSLOduration=1.7481878530000001 podStartE2EDuration="1.748187853s" podCreationTimestamp="2024-07-19 15:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-19 15:34:18.747976117 +0000 UTC m=+1.240071120" watchObservedRunningTime="2024-07-19 15:34:18.748187853 +0000 UTC m=+1.240282856"
	Jul 19 15:34:20 NoKubernetes-273000 kubelet[2083]: I0719 15:34:20.280047    2083 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Jul 19 15:34:21 NoKubernetes-273000 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jul 19 15:34:21 NoKubernetes-273000 systemd[1]: kubelet.service: Deactivated successfully.
	Jul 19 15:34:21 NoKubernetes-273000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.

-- /stdout --
** stderr ** 
	E0719 08:39:24.719746    6755 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 08:39:24.730259    6755 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 08:39:24.742140    6755 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 08:39:24.752753    6755 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 08:39:24.763986    6755 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 08:39:24.775824    6755 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 08:39:24.788267    6755 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 08:39:24.800436    6755 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p NoKubernetes-273000 -n NoKubernetes-273000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p NoKubernetes-273000 -n NoKubernetes-273000: exit status 2 (153.502556ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "NoKubernetes-273000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (180.35s)

Test pass (314/339)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 21.3
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.30.3/json-events 14.29
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.29
18 TestDownloadOnly/v1.30.3/DeleteAll 0.23
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.31.0-beta.0/json-events 15.16
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.29
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.2
30 TestBinaryMirror 0.93
31 TestOffline 171.87
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
36 TestAddons/Setup 232.08
38 TestAddons/parallel/Registry 14.19
39 TestAddons/parallel/Ingress 19.25
40 TestAddons/parallel/InspektorGadget 10.47
41 TestAddons/parallel/MetricsServer 5.49
42 TestAddons/parallel/HelmTiller 10.79
44 TestAddons/parallel/CSI 67.85
45 TestAddons/parallel/Headlamp 12.07
46 TestAddons/parallel/CloudSpanner 5.37
47 TestAddons/parallel/LocalPath 9.79
48 TestAddons/parallel/NvidiaDevicePlugin 5.31
49 TestAddons/parallel/Yakd 6.01
50 TestAddons/parallel/Volcano 40.15
53 TestAddons/serial/GCPAuth/Namespaces 0.09
54 TestAddons/StoppedEnableDisable 5.9
55 TestCertOptions 38.47
56 TestCertExpiration 258.3
57 TestDockerFlags 160.65
58 TestForceSystemdFlag 43.47
59 TestForceSystemdEnv 158.3
62 TestHyperKitDriverInstallOrUpdate 9.07
65 TestErrorSpam/setup 38.45
66 TestErrorSpam/start 1.37
67 TestErrorSpam/status 0.49
68 TestErrorSpam/pause 1.36
69 TestErrorSpam/unpause 1.33
70 TestErrorSpam/stop 153.81
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 91.64
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 38.33
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.05
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.11
82 TestFunctional/serial/CacheCmd/cache/add_local 1.34
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
84 TestFunctional/serial/CacheCmd/cache/list 0.08
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.11
87 TestFunctional/serial/CacheCmd/cache/delete 0.16
88 TestFunctional/serial/MinikubeKubectlCmd 1.15
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.44
90 TestFunctional/serial/ExtraConfig 41.64
91 TestFunctional/serial/ComponentHealth 0.05
92 TestFunctional/serial/LogsCmd 2.67
93 TestFunctional/serial/LogsFileCmd 2.74
94 TestFunctional/serial/InvalidService 4.3
96 TestFunctional/parallel/ConfigCmd 0.52
97 TestFunctional/parallel/DashboardCmd 10.35
98 TestFunctional/parallel/DryRun 1.3
99 TestFunctional/parallel/InternationalLanguage 0.61
100 TestFunctional/parallel/StatusCmd 0.51
104 TestFunctional/parallel/ServiceCmdConnect 8.59
105 TestFunctional/parallel/AddonsCmd 0.26
106 TestFunctional/parallel/PersistentVolumeClaim 40.45
108 TestFunctional/parallel/SSHCmd 0.29
109 TestFunctional/parallel/CpCmd 1.02
110 TestFunctional/parallel/MySQL 26.42
111 TestFunctional/parallel/FileSync 0.25
112 TestFunctional/parallel/CertSync 1.09
116 TestFunctional/parallel/NodeLabels 0.05
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.2
120 TestFunctional/parallel/License 0.63
121 TestFunctional/parallel/Version/short 0.12
122 TestFunctional/parallel/Version/components 0.44
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.18
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.15
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.16
127 TestFunctional/parallel/ImageCommands/ImageBuild 2.04
128 TestFunctional/parallel/ImageCommands/Setup 1.87
129 TestFunctional/parallel/DockerEnv/bash 0.64
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.98
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.63
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.45
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.4
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
140 TestFunctional/parallel/ServiceCmd/DeployApp 20.13
141 TestFunctional/parallel/ServiceCmd/List 0.18
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.18
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.24
144 TestFunctional/parallel/ServiceCmd/Format 0.24
145 TestFunctional/parallel/ServiceCmd/URL 0.25
147 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.37
148 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.14
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
158 TestFunctional/parallel/ProfileCmd/profile_list 0.26
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
160 TestFunctional/parallel/MountCmd/any-port 8.09
161 TestFunctional/parallel/MountCmd/specific-port 1.48
162 TestFunctional/parallel/MountCmd/VerifyCleanup 2.01
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestMultiControlPlane/serial/StartCluster 319.07
170 TestMultiControlPlane/serial/DeployApp 4.87
171 TestMultiControlPlane/serial/PingHostFromPods 1.28
172 TestMultiControlPlane/serial/AddWorkerNode 51.71
173 TestMultiControlPlane/serial/NodeLabels 0.05
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.33
175 TestMultiControlPlane/serial/CopyFile 8.9
176 TestMultiControlPlane/serial/StopSecondaryNode 8.68
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.26
178 TestMultiControlPlane/serial/RestartSecondaryNode 40.47
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.37
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 215.69
181 TestMultiControlPlane/serial/DeleteSecondaryNode 8.14
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.26
183 TestMultiControlPlane/serial/StopCluster 24.93
184 TestMultiControlPlane/serial/RestartCluster 127.84
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.25
186 TestMultiControlPlane/serial/AddSecondaryNode 77.14
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.33
190 TestImageBuild/serial/Setup 153.78
191 TestImageBuild/serial/NormalBuild 1.33
192 TestImageBuild/serial/BuildWithBuildArg 0.5
193 TestImageBuild/serial/BuildWithDockerIgnore 0.25
194 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.22
198 TestJSONOutput/start/Command 52.17
199 TestJSONOutput/start/Audit 0
201 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/pause/Command 0.46
205 TestJSONOutput/pause/Audit 0
207 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/unpause/Command 0.47
211 TestJSONOutput/unpause/Audit 0
213 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
216 TestJSONOutput/stop/Command 8.33
217 TestJSONOutput/stop/Audit 0
219 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
220 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
221 TestErrorJSONOutput 0.57
226 TestMainNoArgs 0.08
227 TestMinikubeProfile 92.56
230 TestMountStart/serial/StartWithMountFirst 21.37
231 TestMountStart/serial/VerifyMountFirst 0.31
235 TestMultiNode/serial/FreshStart2Nodes 488.79
236 TestMultiNode/serial/DeployApp2Nodes 4.27
237 TestMultiNode/serial/PingHostFrom2Pods 0.9
238 TestMultiNode/serial/AddNode 47.98
239 TestMultiNode/serial/MultiNodeLabels 0.05
240 TestMultiNode/serial/ProfileList 0.17
241 TestMultiNode/serial/CopyFile 5.3
242 TestMultiNode/serial/StopNode 2.85
243 TestMultiNode/serial/StartAfterStop 36.51
244 TestMultiNode/serial/RestartKeepsNodes 200.17
245 TestMultiNode/serial/DeleteNode 3.35
246 TestMultiNode/serial/StopMultiNode 16.76
247 TestMultiNode/serial/RestartMultiNode 139.32
248 TestMultiNode/serial/ValidateNameConflict 44.58
252 TestPreload 162.2
254 TestScheduledStopUnix 223.86
255 TestSkaffold 114.93
258 TestRunningBinaryUpgrade 96.73
260 TestKubernetesUpgrade 234.07
273 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.52
274 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 7.24
275 TestStoppedBinaryUpgrade/Setup 1.06
276 TestStoppedBinaryUpgrade/Upgrade 103.59
277 TestStoppedBinaryUpgrade/MinikubeLogs 2.64
279 TestPause/serial/Start 92.96
280 TestPause/serial/SecondStartNoReconfiguration 36.63
289 TestNoKubernetes/serial/StartNoK8sWithVersion 0.79
290 TestNoKubernetes/serial/StartWithK8s 41.35
291 TestPause/serial/Pause 0.53
292 TestPause/serial/VerifyStatus 0.16
293 TestPause/serial/Unpause 0.51
294 TestPause/serial/PauseAgain 0.57
295 TestPause/serial/DeletePaused 5.25
296 TestPause/serial/VerifyDeletedResources 0.19
297 TestNetworkPlugins/group/auto/Start 205.73
299 TestNetworkPlugins/group/auto/KubeletFlags 0.23
300 TestNetworkPlugins/group/auto/NetCatPod 12.16
302 TestNetworkPlugins/group/auto/DNS 0.12
303 TestNetworkPlugins/group/auto/Localhost 0.1
304 TestNetworkPlugins/group/auto/HairPin 0.1
305 TestNetworkPlugins/group/kindnet/Start 73.69
306 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
307 TestNetworkPlugins/group/kindnet/KubeletFlags 0.16
308 TestNetworkPlugins/group/kindnet/NetCatPod 11.13
309 TestNetworkPlugins/group/kindnet/DNS 0.14
310 TestNetworkPlugins/group/kindnet/Localhost 0.09
311 TestNetworkPlugins/group/kindnet/HairPin 0.1
312 TestNetworkPlugins/group/calico/Start 84.12
313 TestNetworkPlugins/group/custom-flannel/Start 63.9
314 TestNetworkPlugins/group/calico/ControllerPod 6.01
315 TestNetworkPlugins/group/calico/KubeletFlags 0.16
316 TestNetworkPlugins/group/calico/NetCatPod 10.14
317 TestNetworkPlugins/group/calico/DNS 0.12
318 TestNetworkPlugins/group/calico/Localhost 0.1
319 TestNetworkPlugins/group/calico/HairPin 0.11
320 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.16
321 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.15
322 TestNetworkPlugins/group/false/Start 57.08
323 TestNetworkPlugins/group/custom-flannel/DNS 0.12
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
326 TestNetworkPlugins/group/enable-default-cni/Start 52.82
327 TestNetworkPlugins/group/false/KubeletFlags 0.16
328 TestNetworkPlugins/group/false/NetCatPod 11.18
329 TestNetworkPlugins/group/false/DNS 0.12
330 TestNetworkPlugins/group/false/Localhost 0.1
331 TestNetworkPlugins/group/false/HairPin 0.1
332 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.17
333 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.14
334 TestNetworkPlugins/group/flannel/Start 61.76
335 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
336 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
337 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
338 TestNetworkPlugins/group/bridge/Start 91.75
339 TestNetworkPlugins/group/flannel/ControllerPod 6.01
340 TestNetworkPlugins/group/flannel/KubeletFlags 0.16
341 TestNetworkPlugins/group/flannel/NetCatPod 11.14
342 TestNetworkPlugins/group/flannel/DNS 0.13
343 TestNetworkPlugins/group/flannel/Localhost 0.11
344 TestNetworkPlugins/group/flannel/HairPin 0.11
345 TestNetworkPlugins/group/kubenet/Start 90.24
346 TestNetworkPlugins/group/bridge/KubeletFlags 0.15
347 TestNetworkPlugins/group/bridge/NetCatPod 12.14
348 TestNetworkPlugins/group/bridge/DNS 0.14
349 TestNetworkPlugins/group/bridge/Localhost 0.11
350 TestNetworkPlugins/group/bridge/HairPin 0.12
352 TestStartStop/group/old-k8s-version/serial/FirstStart 163.33
353 TestNetworkPlugins/group/kubenet/KubeletFlags 0.25
354 TestNetworkPlugins/group/kubenet/NetCatPod 11.14
355 TestNetworkPlugins/group/kubenet/DNS 0.13
356 TestNetworkPlugins/group/kubenet/Localhost 0.1
357 TestNetworkPlugins/group/kubenet/HairPin 0.1
359 TestStartStop/group/no-preload/serial/FirstStart 68.6
360 TestStartStop/group/no-preload/serial/DeployApp 8.21
361 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.81
362 TestStartStop/group/no-preload/serial/Stop 8.44
363 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.32
364 TestStartStop/group/no-preload/serial/SecondStart 293.93
365 TestStartStop/group/old-k8s-version/serial/DeployApp 8.34
366 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.74
367 TestStartStop/group/old-k8s-version/serial/Stop 8.41
368 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
369 TestStartStop/group/old-k8s-version/serial/SecondStart 383.87
370 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
371 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
372 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.16
373 TestStartStop/group/no-preload/serial/Pause 1.95
375 TestStartStop/group/embed-certs/serial/FirstStart 90.57
376 TestStartStop/group/embed-certs/serial/DeployApp 9.21
377 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
378 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
379 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.8
380 TestStartStop/group/embed-certs/serial/Stop 8.45
381 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.16
382 TestStartStop/group/old-k8s-version/serial/Pause 1.89
383 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
384 TestStartStop/group/embed-certs/serial/SecondStart 311.23
386 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 128.98
387 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.2
388 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.77
389 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.45
390 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.34
391 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 310.67
392 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
393 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
394 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.16
395 TestStartStop/group/embed-certs/serial/Pause 1.96
397 TestStartStop/group/newest-cni/serial/FirstStart 157.92
398 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
399 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
400 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.16
401 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.25
402 TestStartStop/group/newest-cni/serial/DeployApp 0
403 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.82
404 TestStartStop/group/newest-cni/serial/Stop 8.43
405 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.32
406 TestStartStop/group/newest-cni/serial/SecondStart 51.18
407 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
408 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
409 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.17
410 TestStartStop/group/newest-cni/serial/Pause 1.82
TestDownloadOnly/v1.20.0/json-events (21.3s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-487000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-487000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (21.303197023s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (21.30s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-487000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-487000: exit status 85 (289.386654ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-487000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT |          |
	|         | -p download-only-487000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 07:19:42
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 07:19:42.502626    1562 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:19:42.502897    1562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:19:42.502902    1562 out.go:304] Setting ErrFile to fd 2...
	I0719 07:19:42.502906    1562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:19:42.503068    1562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
	W0719 07:19:42.503160    1562 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19302-1032/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19302-1032/.minikube/config/config.json: no such file or directory
	I0719 07:19:42.504941    1562 out.go:298] Setting JSON to true
	I0719 07:19:42.527489    1562 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1153,"bootTime":1721397629,"procs":428,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0719 07:19:42.527615    1562 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:19:42.549233    1562 out.go:97] [download-only-487000] minikube v1.33.1 on Darwin 14.5
	I0719 07:19:42.549464    1562 notify.go:220] Checking for updates...
	W0719 07:19:42.549465    1562 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19302-1032/.minikube/cache/preloaded-tarball: no such file or directory
	I0719 07:19:42.570895    1562 out.go:169] MINIKUBE_LOCATION=19302
	I0719 07:19:42.592074    1562 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
	I0719 07:19:42.613848    1562 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0719 07:19:42.634977    1562 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:19:42.656190    1562 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube
	W0719 07:19:42.697961    1562 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 07:19:42.698500    1562 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:19:42.749131    1562 out.go:97] Using the hyperkit driver based on user configuration
	I0719 07:19:42.749191    1562 start.go:297] selected driver: hyperkit
	I0719 07:19:42.749202    1562 start.go:901] validating driver "hyperkit" against <nil>
	I0719 07:19:42.749423    1562 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:19:42.749791    1562 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19302-1032/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0719 07:19:43.153375    1562 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0719 07:19:43.158133    1562 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:19:43.158155    1562 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0719 07:19:43.158184    1562 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:19:43.162088    1562 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0719 07:19:43.162497    1562 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 07:19:43.162523    1562 cni.go:84] Creating CNI manager for ""
	I0719 07:19:43.162539    1562 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 07:19:43.162612    1562 start.go:340] cluster config:
	{Name:download-only-487000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-487000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:19:43.162850    1562 iso.go:125] acquiring lock: {Name:mkadb9ba7febb03c49d2e1dd7dfa4b91b2759763 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:19:43.183865    1562 out.go:97] Downloading VM boot image ...
	I0719 07:19:43.183913    1562 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19302-1032/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 07:19:51.653694    1562 out.go:97] Starting "download-only-487000" primary control-plane node in "download-only-487000" cluster
	I0719 07:19:51.653733    1562 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 07:19:51.708687    1562 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0719 07:19:51.708736    1562 cache.go:56] Caching tarball of preloaded images
	I0719 07:19:51.709131    1562 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 07:19:51.729827    1562 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0719 07:19:51.729854    1562 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 07:19:51.812664    1562 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19302-1032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0719 07:19:59.180435    1562 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 07:19:59.180648    1562 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19302-1032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 07:19:59.726131    1562 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0719 07:19:59.726362    1562 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/download-only-487000/config.json ...
	I0719 07:19:59.726386    1562 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/download-only-487000/config.json: {Name:mk54ce52354101adb988f5a5a72241dbe5432003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:19:59.726752    1562 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 07:19:59.727108    1562 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-1032/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-487000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-487000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-487000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.30.3/json-events (14.29s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-415000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-415000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit : (14.294589167s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (14.29s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-415000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-415000: exit status 85 (291.904958ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-487000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT |                     |
	|         | -p download-only-487000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| delete  | -p download-only-487000        | download-only-487000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| start   | -o=json --download-only        | download-only-415000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | -p download-only-415000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 07:20:04
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 07:20:04.533656    1586 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:20:04.533834    1586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:20:04.533840    1586 out.go:304] Setting ErrFile to fd 2...
	I0719 07:20:04.533843    1586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:20:04.534022    1586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
	I0719 07:20:04.535599    1586 out.go:298] Setting JSON to true
	I0719 07:20:04.558542    1586 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1175,"bootTime":1721397629,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0719 07:20:04.558623    1586 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:20:04.579876    1586 out.go:97] [download-only-415000] minikube v1.33.1 on Darwin 14.5
	I0719 07:20:04.580075    1586 notify.go:220] Checking for updates...
	I0719 07:20:04.601479    1586 out.go:169] MINIKUBE_LOCATION=19302
	I0719 07:20:04.622563    1586 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
	I0719 07:20:04.645810    1586 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0719 07:20:04.666529    1586 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:20:04.687667    1586 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube
	W0719 07:20:04.729633    1586 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 07:20:04.730116    1586 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:20:04.760636    1586 out.go:97] Using the hyperkit driver based on user configuration
	I0719 07:20:04.760690    1586 start.go:297] selected driver: hyperkit
	I0719 07:20:04.760702    1586 start.go:901] validating driver "hyperkit" against <nil>
	I0719 07:20:04.760906    1586 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:20:04.761176    1586 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19302-1032/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0719 07:20:04.770843    1586 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0719 07:20:04.774644    1586 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:20:04.774679    1586 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0719 07:20:04.774713    1586 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:20:04.777356    1586 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0719 07:20:04.777545    1586 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 07:20:04.777584    1586 cni.go:84] Creating CNI manager for ""
	I0719 07:20:04.777602    1586 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:20:04.777612    1586 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:20:04.777675    1586 start.go:340] cluster config:
	{Name:download-only-415000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-415000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:20:04.777760    1586 iso.go:125] acquiring lock: {Name:mkadb9ba7febb03c49d2e1dd7dfa4b91b2759763 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:20:04.798494    1586 out.go:97] Starting "download-only-415000" primary control-plane node in "download-only-415000" cluster
	I0719 07:20:04.798539    1586 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:20:04.864098    1586 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 07:20:04.864166    1586 cache.go:56] Caching tarball of preloaded images
	I0719 07:20:04.864716    1586 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:20:04.886286    1586 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0719 07:20:04.886319    1586 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0719 07:20:04.970182    1586 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> /Users/jenkins/minikube-integration/19302-1032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 07:20:14.083928    1586 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0719 07:20:14.084348    1586 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19302-1032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-415000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-415000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

TestDownloadOnly/v1.30.3/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.23s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-415000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0-beta.0/json-events (15.16s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-875000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-875000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperkit : (15.155070735s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (15.16s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-875000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-875000: exit status 85 (292.604083ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-487000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT |                     |
	|         | -p download-only-487000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| delete  | -p download-only-487000             | download-only-487000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| start   | -o=json --download-only             | download-only-415000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | -p download-only-415000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| delete  | -p download-only-415000             | download-only-415000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| start   | -o=json --download-only             | download-only-875000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | -p download-only-875000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 07:20:19
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 07:20:19.556353    1610 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:20:19.556939    1610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:20:19.556946    1610 out.go:304] Setting ErrFile to fd 2...
	I0719 07:20:19.556950    1610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:20:19.557638    1610 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
	I0719 07:20:19.559191    1610 out.go:298] Setting JSON to true
	I0719 07:20:19.582109    1610 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1190,"bootTime":1721397629,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0719 07:20:19.582198    1610 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:20:19.603514    1610 out.go:97] [download-only-875000] minikube v1.33.1 on Darwin 14.5
	I0719 07:20:19.603727    1610 notify.go:220] Checking for updates...
	I0719 07:20:19.625558    1610 out.go:169] MINIKUBE_LOCATION=19302
	I0719 07:20:19.647312    1610 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
	I0719 07:20:19.668489    1610 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0719 07:20:19.696656    1610 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:20:19.717315    1610 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube
	W0719 07:20:19.759144    1610 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 07:20:19.759638    1610 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:20:19.789223    1610 out.go:97] Using the hyperkit driver based on user configuration
	I0719 07:20:19.789275    1610 start.go:297] selected driver: hyperkit
	I0719 07:20:19.789283    1610 start.go:901] validating driver "hyperkit" against <nil>
	I0719 07:20:19.789550    1610 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:20:19.789777    1610 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19302-1032/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0719 07:20:19.799500    1610 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0719 07:20:19.803315    1610 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:20:19.803347    1610 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0719 07:20:19.803374    1610 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:20:19.806033    1610 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0719 07:20:19.806188    1610 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 07:20:19.806231    1610 cni.go:84] Creating CNI manager for ""
	I0719 07:20:19.806249    1610 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:20:19.806257    1610 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:20:19.806326    1610 start.go:340] cluster config:
	{Name:download-only-875000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:20:19.806414    1610 iso.go:125] acquiring lock: {Name:mkadb9ba7febb03c49d2e1dd7dfa4b91b2759763 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:20:19.826990    1610 out.go:97] Starting "download-only-875000" primary control-plane node in "download-only-875000" cluster
	I0719 07:20:19.827025    1610 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 07:20:19.890693    1610 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0719 07:20:19.890759    1610 cache.go:56] Caching tarball of preloaded images
	I0719 07:20:19.891159    1610 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 07:20:19.914867    1610 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0719 07:20:19.914928    1610 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 07:20:19.996581    1610 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> /Users/jenkins/minikube-integration/19302-1032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0719 07:20:29.424638    1610 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 07:20:29.424819    1610 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19302-1032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 07:20:29.892915    1610 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0719 07:20:29.893173    1610 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/download-only-875000/config.json ...
	I0719 07:20:29.893199    1610 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/download-only-875000/config.json: {Name:mka584ba8731e1c7ea2aa77b2d45db4860b1def5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:20:29.893555    1610 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 07:20:29.893790    1610 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-1032/.minikube/cache/darwin/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-875000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-875000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.23s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.2s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-875000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.20s)

TestBinaryMirror (0.93s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-843000 --alsologtostderr --binary-mirror http://127.0.0.1:49821 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-843000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-843000
--- PASS: TestBinaryMirror (0.93s)

TestOffline (171.87s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-505000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-505000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (2m46.625589588s)
helpers_test.go:175: Cleaning up "offline-docker-505000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-505000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-505000: (5.241830508s)
--- PASS: TestOffline (171.87s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-870000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-870000: exit status 85 (186.604079ms)

-- stdout --
	* Profile "addons-870000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-870000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-870000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-870000: exit status 85 (207.976069ms)

-- stdout --
	* Profile "addons-870000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-870000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (232.08s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-870000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-870000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m52.083433604s)
--- PASS: TestAddons/Setup (232.08s)

TestAddons/parallel/Registry (14.19s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 11.064446ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-5xr6d" [75e83982-11d8-4b3e-ae58-298c7fea81e7] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005668065s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fjkw9" [41b3b129-a40f-4ad6-9e13-43c3181efee7] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004244954s
addons_test.go:342: (dbg) Run:  kubectl --context addons-870000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-870000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-870000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.505041048s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-870000 ip
2024/07/19 07:24:42 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-870000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.19s)

TestAddons/parallel/Ingress (19.25s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-870000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-870000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-870000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b1e2a564-4fd8-42ce-bd54-68da20cdc82b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b1e2a564-4fd8-42ce-bd54-68da20cdc82b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003851235s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-870000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-870000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-870000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-870000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-870000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-870000 addons disable ingress --alsologtostderr -v=1: (7.442976759s)
--- PASS: TestAddons/parallel/Ingress (19.25s)

TestAddons/parallel/InspektorGadget (10.47s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jtvf4" [ea774b87-ee60-41dc-930d-8e091824f255] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004426311s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-870000
addons_test.go:843: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-870000: (5.46382343s)
--- PASS: TestAddons/parallel/InspektorGadget (10.47s)

TestAddons/parallel/MetricsServer (5.49s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.483759ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-6lmz5" [b153ec25-8622-4a73-bcc0-c20220ab2ecf] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005567117s
addons_test.go:417: (dbg) Run:  kubectl --context addons-870000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-870000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.49s)

TestAddons/parallel/HelmTiller (10.79s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.605073ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-w64nn" [39ce1d00-7599-407d-9cd1-4f4775806bf2] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005329366s
addons_test.go:475: (dbg) Run:  kubectl --context addons-870000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-870000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.390043076s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-870000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.79s)

TestAddons/parallel/CSI (67.85s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 4.148073ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-870000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-870000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d492c62a-fa65-4036-8907-6446b9d057f3] Pending
helpers_test.go:344: "task-pv-pod" [d492c62a-fa65-4036-8907-6446b9d057f3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d492c62a-fa65-4036-8907-6446b9d057f3] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.00465881s
addons_test.go:586: (dbg) Run:  kubectl --context addons-870000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-870000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-870000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-870000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-870000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-870000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-870000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6fbc625b-fc37-4bac-ac6a-76d25eafb3a7] Pending
helpers_test.go:344: "task-pv-pod-restore" [6fbc625b-fc37-4bac-ac6a-76d25eafb3a7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6fbc625b-fc37-4bac-ac6a-76d25eafb3a7] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.001905381s
addons_test.go:628: (dbg) Run:  kubectl --context addons-870000 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-870000 delete pod task-pv-pod-restore: (1.013022148s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-870000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-870000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-amd64 -p addons-870000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-amd64 -p addons-870000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.463196446s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-870000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (67.85s)

TestAddons/parallel/Headlamp (12.07s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-870000 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-870000 --alsologtostderr -v=1: (1.06704281s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-4gljp" [fc1a1c8e-c330-4ae7-9e54-cb5647c7b1ba] Pending
helpers_test.go:344: "headlamp-7867546754-4gljp" [fc1a1c8e-c330-4ae7-9e54-cb5647c7b1ba] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-4gljp" [fc1a1c8e-c330-4ae7-9e54-cb5647c7b1ba] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004460199s
--- PASS: TestAddons/parallel/Headlamp (12.07s)

TestAddons/parallel/CloudSpanner (5.37s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-cdmv2" [252251e2-c9c2-49bd-8972-5cae8b925d84] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004838012s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-870000
--- PASS: TestAddons/parallel/CloudSpanner (5.37s)

TestAddons/parallel/LocalPath (9.79s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-870000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-870000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-870000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [383afbe0-8994-4dfe-85db-85bed49cef02] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [383afbe0-8994-4dfe-85db-85bed49cef02] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [383afbe0-8994-4dfe-85db-85bed49cef02] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00423124s
addons_test.go:992: (dbg) Run:  kubectl --context addons-870000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-amd64 -p addons-870000 ssh "cat /opt/local-path-provisioner/pvc-c721cb30-95d9-4621-ae84-a6ee078ac577_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-870000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-870000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-amd64 -p addons-870000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.79s)

TestAddons/parallel/NvidiaDevicePlugin (5.31s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-x5qk4" [7f1b2164-3c3f-49e0-80c7-bc7685d87f45] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004820356s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-870000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.31s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-g6d5g" [9cd3929e-5fc9-45d5-bf8c-b054be8b4b98] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004329759s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/parallel/Volcano (40.15s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 1.643898ms
addons_test.go:897: volcano-admission stabilized in 1.776791ms
addons_test.go:889: volcano-scheduler stabilized in 2.351058ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-42ztx" [f709c2fa-0aeb-437a-a38a-d1550610ae9a] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.004525194s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-bk48j" [49d0802c-25fd-4c02-8ceb-f08d5a5fed53] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.004186989s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-fsnpg" [71d51d2b-50a1-4e0c-afdf-5757d443debd] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.005181778s
addons_test.go:924: (dbg) Run:  kubectl --context addons-870000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-870000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-870000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ee40a89f-e352-43da-be6b-1f42f15aa37b] Pending
helpers_test.go:344: "test-job-nginx-0" [ee40a89f-e352-43da-be6b-1f42f15aa37b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [ee40a89f-e352-43da-be6b-1f42f15aa37b] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 15.003801615s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-amd64 -p addons-870000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-amd64 -p addons-870000 addons disable volcano --alsologtostderr -v=1: (9.902806002s)
--- PASS: TestAddons/parallel/Volcano (40.15s)

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-870000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-870000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/StoppedEnableDisable (5.9s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-870000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-870000: (5.369381343s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-870000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-870000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-870000
--- PASS: TestAddons/StoppedEnableDisable (5.90s)

TestCertOptions (38.47s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-585000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0719 08:27:55.352960    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:27:59.655601    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-585000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (34.742463029s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-585000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-585000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-585000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-585000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-585000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-585000: (3.378472431s)
--- PASS: TestCertOptions (38.47s)

TestCertExpiration (258.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-458000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-458000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (38.498837963s)
E0719 08:26:33.424109    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:26:33.430217    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:26:33.441942    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:26:33.462950    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:26:33.504329    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:26:33.585822    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:26:33.747305    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:26:34.067480    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:26:34.707978    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:26:35.989627    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:26:38.551380    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:26:43.671480    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:26:53.913032    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:27:14.392685    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-458000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-458000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (34.546736362s)
helpers_test.go:175: Cleaning up "cert-expiration-458000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-458000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-458000: (5.256468101s)
--- PASS: TestCertExpiration (258.30s)

TestDockerFlags (160.65s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-569000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-569000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (2m36.902537314s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-569000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-569000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-569000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-569000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-569000: (3.428474775s)
--- PASS: TestDockerFlags (160.65s)

TestForceSystemdFlag (43.47s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-828000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-828000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (38.004305813s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-828000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-828000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-828000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-828000: (5.305172389s)
--- PASS: TestForceSystemdFlag (43.47s)

TestForceSystemdEnv (158.3s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-618000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0719 08:23:16.608036    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 08:24:29.238527    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-618000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (2m32.895080122s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-618000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-618000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-618000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-618000: (5.241377852s)
--- PASS: TestForceSystemdEnv (158.30s)

TestHyperKitDriverInstallOrUpdate (9.07s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.07s)

TestErrorSpam/setup (38.45s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-457000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-457000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 --driver=hyperkit : (38.445168165s)
--- PASS: TestErrorSpam/setup (38.45s)

TestErrorSpam/start (1.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 start --dry-run
--- PASS: TestErrorSpam/start (1.37s)

TestErrorSpam/status (0.49s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 status
--- PASS: TestErrorSpam/status (0.49s)

TestErrorSpam/pause (1.36s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 pause
--- PASS: TestErrorSpam/pause (1.36s)

TestErrorSpam/unpause (1.33s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 unpause
--- PASS: TestErrorSpam/unpause (1.33s)

TestErrorSpam/stop (153.81s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 stop: (3.389766784s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 stop: (1m15.200706464s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 stop
E0719 07:29:29.230368    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:29:29.237676    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:29:29.248572    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:29:29.269633    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:29:29.311962    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:29:29.394145    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:29:29.554873    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:29:29.877079    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:29:30.517356    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:29:31.798987    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:29:34.360068    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:29:39.481201    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:29:49.723524    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-457000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-457000 stop: (1m15.220241211s)
--- PASS: TestErrorSpam/stop (153.81s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19302-1032/.minikube/files/etc/test/nested/copy/1560/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (91.64s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-638000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0719 07:30:10.204045    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:30:51.164476    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-638000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m31.641809436s)
--- PASS: TestFunctional/serial/StartWithProxy (91.64s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.33s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-638000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-638000 --alsologtostderr -v=8: (38.328413386s)
functional_test.go:659: soft start took 38.328847487s for "functional-638000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.33s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-638000 get po -A
E0719 07:32:13.085740    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-638000 cache add registry.k8s.io/pause:3.1: (1.203799088s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-638000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4184571814/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 cache add minikube-local-cache-test:functional-638000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 cache delete minikube-local-cache-test:functional-638000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-638000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-638000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (146.89875ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.11s)

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 kubectl -- --context functional-638000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-638000 kubectl -- --context functional-638000 get pods: (1.15472596s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.44s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-638000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-638000 get pods: (1.441112155s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.44s)

TestFunctional/serial/ExtraConfig (41.64s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-638000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-638000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.638030617s)
functional_test.go:757: restart took 41.638150276s for "functional-638000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.64s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-638000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.67s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-638000 logs: (2.674286345s)
--- PASS: TestFunctional/serial/LogsCmd (2.67s)

TestFunctional/serial/LogsFileCmd (2.74s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1260251438/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-638000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1260251438/001/logs.txt: (2.736926218s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.74s)

TestFunctional/serial/InvalidService (4.3s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-638000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-638000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-638000: exit status 115 (270.579836ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:31726 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-638000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.30s)

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-638000 config get cpus: exit status 14 (71.828732ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-638000 config get cpus: exit status 14 (54.693053ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)

TestFunctional/parallel/DashboardCmd (10.35s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-638000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-638000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3045: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.35s)

TestFunctional/parallel/DryRun (1.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-638000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-638000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (795.488388ms)

-- stdout --
	* [functional-638000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0719 07:34:15.358366    3016 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:34:15.358539    3016 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:34:15.358544    3016 out.go:304] Setting ErrFile to fd 2...
	I0719 07:34:15.358548    3016 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:34:15.358724    3016 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
	I0719 07:34:15.360151    3016 out.go:298] Setting JSON to false
	I0719 07:34:15.382763    3016 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2026,"bootTime":1721397629,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0719 07:34:15.382858    3016 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:34:15.404745    3016 out.go:177] * [functional-638000] minikube v1.33.1 on Darwin 14.5
	I0719 07:34:15.448324    3016 notify.go:220] Checking for updates...
	I0719 07:34:15.469444    3016 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:34:15.547904    3016 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
	I0719 07:34:15.622039    3016 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0719 07:34:15.679955    3016 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:34:15.753887    3016 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube
	I0719 07:34:15.812081    3016 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:34:15.849839    3016 config.go:182] Loaded profile config "functional-638000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:34:15.850605    3016 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:34:15.850683    3016 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:34:15.860464    3016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51069
	I0719 07:34:15.860841    3016 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:34:15.861262    3016 main.go:141] libmachine: Using API Version  1
	I0719 07:34:15.861277    3016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:34:15.861526    3016 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:34:15.861640    3016 main.go:141] libmachine: (functional-638000) Calling .DriverName
	I0719 07:34:15.861840    3016 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:34:15.862104    3016 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:34:15.862128    3016 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:34:15.870852    3016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51071
	I0719 07:34:15.871227    3016 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:34:15.871602    3016 main.go:141] libmachine: Using API Version  1
	I0719 07:34:15.871619    3016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:34:15.871847    3016 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:34:15.871956    3016 main.go:141] libmachine: (functional-638000) Calling .DriverName
	I0719 07:34:15.902216    3016 out.go:177] * Using the hyperkit driver based on existing profile
	I0719 07:34:15.960073    3016 start.go:297] selected driver: hyperkit
	I0719 07:34:15.960099    3016 start.go:901] validating driver "hyperkit" against &{Name:functional-638000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-638000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:34:15.960298    3016 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:34:15.984971    3016 out.go:177] 
	W0719 07:34:16.005816    3016 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0719 07:34:16.063924    3016 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-638000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.30s)

TestFunctional/parallel/InternationalLanguage (0.61s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-638000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-638000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (614.539831ms)

-- stdout --
	* [functional-638000] minikube v1.33.1 sur Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0719 07:34:16.649706    3032 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:34:16.649860    3032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:34:16.649865    3032 out.go:304] Setting ErrFile to fd 2...
	I0719 07:34:16.649869    3032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:34:16.650078    3032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
	I0719 07:34:16.651704    3032 out.go:298] Setting JSON to false
	I0719 07:34:16.674464    3032 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2027,"bootTime":1721397629,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0719 07:34:16.674555    3032 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:34:16.695842    3032 out.go:177] * [functional-638000] minikube v1.33.1 sur Darwin 14.5
	I0719 07:34:16.738067    3032 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:34:16.738149    3032 notify.go:220] Checking for updates...
	I0719 07:34:16.779870    3032 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
	I0719 07:34:16.800863    3032 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0719 07:34:16.821792    3032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:34:16.842928    3032 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube
	I0719 07:34:16.864090    3032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:34:16.901661    3032 config.go:182] Loaded profile config "functional-638000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:34:16.902374    3032 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:34:16.902450    3032 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:34:16.912217    3032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51079
	I0719 07:34:16.912590    3032 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:34:16.913027    3032 main.go:141] libmachine: Using API Version  1
	I0719 07:34:16.913037    3032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:34:16.913327    3032 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:34:16.913457    3032 main.go:141] libmachine: (functional-638000) Calling .DriverName
	I0719 07:34:16.913675    3032 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:34:16.913935    3032 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:34:16.913964    3032 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:34:16.922654    3032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51081
	I0719 07:34:16.923004    3032 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:34:16.923368    3032 main.go:141] libmachine: Using API Version  1
	I0719 07:34:16.923385    3032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:34:16.923591    3032 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:34:16.923709    3032 main.go:141] libmachine: (functional-638000) Calling .DriverName
	I0719 07:34:16.969935    3032 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0719 07:34:17.043945    3032 start.go:297] selected driver: hyperkit
	I0719 07:34:17.043968    3032 start.go:901] validating driver "hyperkit" against &{Name:functional-638000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-638000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:34:17.044188    3032 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:34:17.085112    3032 out.go:177] 
	W0719 07:34:17.121890    3032 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0719 07:34:17.142861    3032 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.61s)

TestFunctional/parallel/StatusCmd (0.51s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.51s)

TestFunctional/parallel/ServiceCmdConnect (8.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-638000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-638000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-s6xf8" [aedf5f7e-8fce-4a44-a676-6536daac8fc6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-s6xf8" [aedf5f7e-8fce-4a44-a676-6536daac8fc6] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.012984927s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.169.0.4:32486
functional_test.go:1671: http://192.169.0.4:32486: success! body:

Hostname: hello-node-connect-57b4589c47-s6xf8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:32486
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.59s)

TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/PersistentVolumeClaim (40.45s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ca864f6e-1ce1-478f-9a10-13273a1adb8c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005519159s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-638000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-638000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-638000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-638000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [68e20bc3-f34a-4343-94da-3f33db9924c5] Pending
helpers_test.go:344: "sp-pod" [68e20bc3-f34a-4343-94da-3f33db9924c5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [68e20bc3-f34a-4343-94da-3f33db9924c5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.004508332s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-638000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-638000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-638000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [43d897b6-4dd0-47f6-b514-96c2a2c129a7] Pending
helpers_test.go:344: "sp-pod" [43d897b6-4dd0-47f6-b514-96c2a2c129a7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [43d897b6-4dd0-47f6-b514-96c2a2c129a7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003556188s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-638000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.45s)

TestFunctional/parallel/SSHCmd (0.29s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.29s)

TestFunctional/parallel/CpCmd (1.02s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh -n functional-638000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 cp functional-638000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd125592842/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh -n functional-638000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh -n functional-638000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.02s)

TestFunctional/parallel/MySQL (26.42s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-638000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-rmr2q" [3827fdea-655e-42dd-81ac-ab4f2bf87b55] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-rmr2q" [3827fdea-655e-42dd-81ac-ab4f2bf87b55] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.00251499s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-638000 exec mysql-64454c8b5c-rmr2q -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-638000 exec mysql-64454c8b5c-rmr2q -- mysql -ppassword -e "show databases;": exit status 1 (112.501484ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-638000 exec mysql-64454c8b5c-rmr2q -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-638000 exec mysql-64454c8b5c-rmr2q -- mysql -ppassword -e "show databases;": exit status 1 (160.289906ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-638000 exec mysql-64454c8b5c-rmr2q -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-638000 exec mysql-64454c8b5c-rmr2q -- mysql -ppassword -e "show databases;": exit status 1 (107.631652ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-638000 exec mysql-64454c8b5c-rmr2q -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.42s)

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1560/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "sudo cat /etc/test/nested/copy/1560/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.09s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1560.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "sudo cat /etc/ssl/certs/1560.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1560.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "sudo cat /usr/share/ca-certificates/1560.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/15602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "sudo cat /etc/ssl/certs/15602.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/15602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "sudo cat /usr/share/ca-certificates/15602.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.09s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-638000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.2s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-638000 ssh "sudo systemctl is-active crio": exit status 1 (201.355518ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.20s)

TestFunctional/parallel/License (0.63s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (0.44s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-638000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-638000
docker.io/kicbase/echo-server:functional-638000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-638000 image ls --format short --alsologtostderr:
I0719 07:34:24.444099    3065 out.go:291] Setting OutFile to fd 1 ...
I0719 07:34:24.444406    3065 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:34:24.444412    3065 out.go:304] Setting ErrFile to fd 2...
I0719 07:34:24.444416    3065 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:34:24.444600    3065 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
I0719 07:34:24.445288    3065 config.go:182] Loaded profile config "functional-638000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:34:24.445393    3065 config.go:182] Loaded profile config "functional-638000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:34:24.445806    3065 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 07:34:24.445854    3065 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 07:34:24.454972    3065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51141
I0719 07:34:24.455414    3065 main.go:141] libmachine: () Calling .GetVersion
I0719 07:34:24.455841    3065 main.go:141] libmachine: Using API Version  1
I0719 07:34:24.455862    3065 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 07:34:24.456108    3065 main.go:141] libmachine: () Calling .GetMachineName
I0719 07:34:24.456219    3065 main.go:141] libmachine: (functional-638000) Calling .GetState
I0719 07:34:24.456315    3065 main.go:141] libmachine: (functional-638000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0719 07:34:24.456391    3065 main.go:141] libmachine: (functional-638000) DBG | hyperkit pid from json: 2349
I0719 07:34:24.457609    3065 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 07:34:24.457634    3065 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 07:34:24.466146    3065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51143
I0719 07:34:24.466505    3065 main.go:141] libmachine: () Calling .GetVersion
I0719 07:34:24.466894    3065 main.go:141] libmachine: Using API Version  1
I0719 07:34:24.466914    3065 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 07:34:24.467145    3065 main.go:141] libmachine: () Calling .GetMachineName
I0719 07:34:24.467258    3065 main.go:141] libmachine: (functional-638000) Calling .DriverName
I0719 07:34:24.467436    3065 ssh_runner.go:195] Run: systemctl --version
I0719 07:34:24.467457    3065 main.go:141] libmachine: (functional-638000) Calling .GetSSHHostname
I0719 07:34:24.467557    3065 main.go:141] libmachine: (functional-638000) Calling .GetSSHPort
I0719 07:34:24.467667    3065 main.go:141] libmachine: (functional-638000) Calling .GetSSHKeyPath
I0719 07:34:24.467772    3065 main.go:141] libmachine: (functional-638000) Calling .GetSSHUsername
I0719 07:34:24.467871    3065 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/functional-638000/id_rsa Username:docker}
I0719 07:34:24.511007    3065 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0719 07:34:24.538929    3065 main.go:141] libmachine: Making call to close driver server
I0719 07:34:24.538942    3065 main.go:141] libmachine: (functional-638000) Calling .Close
I0719 07:34:24.539080    3065 main.go:141] libmachine: Successfully made call to close driver server
I0719 07:34:24.539089    3065 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 07:34:24.539100    3065 main.go:141] libmachine: Making call to close driver server
I0719 07:34:24.539106    3065 main.go:141] libmachine: (functional-638000) Calling .Close
I0719 07:34:24.539109    3065 main.go:141] libmachine: (functional-638000) DBG | Closing plugin on server side
I0719 07:34:24.539238    3065 main.go:141] libmachine: Successfully made call to close driver server
I0719 07:34:24.539245    3065 main.go:141] libmachine: (functional-638000) DBG | Closing plugin on server side
I0719 07:34:24.539248    3065 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-638000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-638000 | 1c8aa8800c821 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| docker.io/library/nginx                     | latest            | fffffc90d343c | 188MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kicbase/echo-server               | functional-638000 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| docker.io/library/nginx                     | alpine            | 099a2d701db1f | 43.2MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-638000 image ls --format table --alsologtostderr:
I0719 07:34:24.932757    3078 out.go:291] Setting OutFile to fd 1 ...
I0719 07:34:24.932953    3078 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:34:24.932958    3078 out.go:304] Setting ErrFile to fd 2...
I0719 07:34:24.932963    3078 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:34:24.933154    3078 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
I0719 07:34:24.933765    3078 config.go:182] Loaded profile config "functional-638000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:34:24.933860    3078 config.go:182] Loaded profile config "functional-638000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:34:24.934197    3078 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 07:34:24.934240    3078 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 07:34:24.942940    3078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51156
I0719 07:34:24.943349    3078 main.go:141] libmachine: () Calling .GetVersion
I0719 07:34:24.943769    3078 main.go:141] libmachine: Using API Version  1
I0719 07:34:24.943802    3078 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 07:34:24.944041    3078 main.go:141] libmachine: () Calling .GetMachineName
I0719 07:34:24.944164    3078 main.go:141] libmachine: (functional-638000) Calling .GetState
I0719 07:34:24.944255    3078 main.go:141] libmachine: (functional-638000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0719 07:34:24.944323    3078 main.go:141] libmachine: (functional-638000) DBG | hyperkit pid from json: 2349
I0719 07:34:24.945531    3078 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 07:34:24.945557    3078 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 07:34:24.954228    3078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51158
I0719 07:34:24.954646    3078 main.go:141] libmachine: () Calling .GetVersion
I0719 07:34:24.955005    3078 main.go:141] libmachine: Using API Version  1
I0719 07:34:24.955014    3078 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 07:34:24.955254    3078 main.go:141] libmachine: () Calling .GetMachineName
I0719 07:34:24.955378    3078 main.go:141] libmachine: (functional-638000) Calling .DriverName
I0719 07:34:24.955536    3078 ssh_runner.go:195] Run: systemctl --version
I0719 07:34:24.955556    3078 main.go:141] libmachine: (functional-638000) Calling .GetSSHHostname
I0719 07:34:24.955644    3078 main.go:141] libmachine: (functional-638000) Calling .GetSSHPort
I0719 07:34:24.955717    3078 main.go:141] libmachine: (functional-638000) Calling .GetSSHKeyPath
I0719 07:34:24.955804    3078 main.go:141] libmachine: (functional-638000) Calling .GetSSHUsername
I0719 07:34:24.955898    3078 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/functional-638000/id_rsa Username:docker}
I0719 07:34:24.993546    3078 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0719 07:34:25.012101    3078 main.go:141] libmachine: Making call to close driver server
I0719 07:34:25.012110    3078 main.go:141] libmachine: (functional-638000) Calling .Close
I0719 07:34:25.012278    3078 main.go:141] libmachine: Successfully made call to close driver server
I0719 07:34:25.012287    3078 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 07:34:25.012292    3078 main.go:141] libmachine: Making call to close driver server
I0719 07:34:25.012297    3078 main.go:141] libmachine: (functional-638000) Calling .Close
I0719 07:34:25.012298    3078 main.go:141] libmachine: (functional-638000) DBG | Closing plugin on server side
I0719 07:34:25.012427    3078 main.go:141] libmachine: Successfully made call to close driver server
I0719 07:34:25.012446    3078 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 07:34:25.012455    3078 main.go:141] libmachine: (functional-638000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-638000 image ls --format json --alsologtostderr:
[{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-638000"],"size":"4940000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"1c8aa8800c8213c0d6b7aa132e96531069a503a366bf49dbcabec91ad97b4e78","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-638000"],"size":"30"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-638000 image ls --format json --alsologtostderr:
I0719 07:34:24.617337    3070 out.go:291] Setting OutFile to fd 1 ...
I0719 07:34:24.617631    3070 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:34:24.617637    3070 out.go:304] Setting ErrFile to fd 2...
I0719 07:34:24.617641    3070 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:34:24.617822    3070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
I0719 07:34:24.618398    3070 config.go:182] Loaded profile config "functional-638000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:34:24.618494    3070 config.go:182] Loaded profile config "functional-638000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:34:24.618842    3070 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 07:34:24.618883    3070 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 07:34:24.627514    3070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51146
I0719 07:34:24.627965    3070 main.go:141] libmachine: () Calling .GetVersion
I0719 07:34:24.628381    3070 main.go:141] libmachine: Using API Version  1
I0719 07:34:24.628391    3070 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 07:34:24.628632    3070 main.go:141] libmachine: () Calling .GetMachineName
I0719 07:34:24.628753    3070 main.go:141] libmachine: (functional-638000) Calling .GetState
I0719 07:34:24.628834    3070 main.go:141] libmachine: (functional-638000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0719 07:34:24.628909    3070 main.go:141] libmachine: (functional-638000) DBG | hyperkit pid from json: 2349
I0719 07:34:24.630106    3070 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 07:34:24.630131    3070 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 07:34:24.638493    3070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51148
I0719 07:34:24.638861    3070 main.go:141] libmachine: () Calling .GetVersion
I0719 07:34:24.639219    3070 main.go:141] libmachine: Using API Version  1
I0719 07:34:24.639237    3070 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 07:34:24.639492    3070 main.go:141] libmachine: () Calling .GetMachineName
I0719 07:34:24.639617    3070 main.go:141] libmachine: (functional-638000) Calling .DriverName
I0719 07:34:24.639788    3070 ssh_runner.go:195] Run: systemctl --version
I0719 07:34:24.639808    3070 main.go:141] libmachine: (functional-638000) Calling .GetSSHHostname
I0719 07:34:24.639904    3070 main.go:141] libmachine: (functional-638000) Calling .GetSSHPort
I0719 07:34:24.639981    3070 main.go:141] libmachine: (functional-638000) Calling .GetSSHKeyPath
I0719 07:34:24.640069    3070 main.go:141] libmachine: (functional-638000) Calling .GetSSHUsername
I0719 07:34:24.640165    3070 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/functional-638000/id_rsa Username:docker}
I0719 07:34:24.675220    3070 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0719 07:34:24.694702    3070 main.go:141] libmachine: Making call to close driver server
I0719 07:34:24.694710    3070 main.go:141] libmachine: (functional-638000) Calling .Close
I0719 07:34:24.694872    3070 main.go:141] libmachine: Successfully made call to close driver server
I0719 07:34:24.694881    3070 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 07:34:24.694889    3070 main.go:141] libmachine: Making call to close driver server
I0719 07:34:24.694891    3070 main.go:141] libmachine: (functional-638000) DBG | Closing plugin on server side
I0719 07:34:24.694893    3070 main.go:141] libmachine: (functional-638000) Calling .Close
I0719 07:34:24.695060    3070 main.go:141] libmachine: Successfully made call to close driver server
I0719 07:34:24.695071    3070 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 07:34:24.695081    3070 main.go:141] libmachine: (functional-638000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)
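The JSON format above is a flat array of image records (`id`, `repoDigests`, `repoTags`, `size`, with `size` as a decimal-byte string). As a standalone sketch of how that payload could be post-processed — using an inlined sample record copied from the log rather than a live `minikube image ls --format json` call:

```python
import json

# One record in the same shape as the `image ls --format json` output above.
# Sample data is copied from the log; this is a sketch, not a call into minikube.
sample = ('[{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",'
          '"repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],'
          '"size":"117000000"}]')

# Map each repo tag to its size in MB; minikube reports sizes as decimal bytes,
# which matches the MB figures shown in the table-format output.
images = {tag: int(img["size"]) // 1_000_000
          for img in json.loads(sample)
          for tag in img["repoTags"]}
print(images)
```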

TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-638000 image ls --format yaml --alsologtostderr:
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: 099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 1c8aa8800c8213c0d6b7aa132e96531069a503a366bf49dbcabec91ad97b4e78
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-638000
size: "30"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-638000
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-638000 image ls --format yaml --alsologtostderr:
I0719 07:34:24.772407    3074 out.go:291] Setting OutFile to fd 1 ...
I0719 07:34:24.772691    3074 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:34:24.772696    3074 out.go:304] Setting ErrFile to fd 2...
I0719 07:34:24.772700    3074 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:34:24.772893    3074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
I0719 07:34:24.773510    3074 config.go:182] Loaded profile config "functional-638000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:34:24.773607    3074 config.go:182] Loaded profile config "functional-638000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:34:24.773968    3074 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 07:34:24.774011    3074 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 07:34:24.782529    3074 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51151
I0719 07:34:24.782973    3074 main.go:141] libmachine: () Calling .GetVersion
I0719 07:34:24.783398    3074 main.go:141] libmachine: Using API Version  1
I0719 07:34:24.783410    3074 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 07:34:24.783673    3074 main.go:141] libmachine: () Calling .GetMachineName
I0719 07:34:24.783791    3074 main.go:141] libmachine: (functional-638000) Calling .GetState
I0719 07:34:24.783890    3074 main.go:141] libmachine: (functional-638000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0719 07:34:24.783949    3074 main.go:141] libmachine: (functional-638000) DBG | hyperkit pid from json: 2349
I0719 07:34:24.785151    3074 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 07:34:24.785182    3074 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 07:34:24.793777    3074 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51153
I0719 07:34:24.794143    3074 main.go:141] libmachine: () Calling .GetVersion
I0719 07:34:24.794465    3074 main.go:141] libmachine: Using API Version  1
I0719 07:34:24.794476    3074 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 07:34:24.794685    3074 main.go:141] libmachine: () Calling .GetMachineName
I0719 07:34:24.794792    3074 main.go:141] libmachine: (functional-638000) Calling .DriverName
I0719 07:34:24.794962    3074 ssh_runner.go:195] Run: systemctl --version
I0719 07:34:24.794981    3074 main.go:141] libmachine: (functional-638000) Calling .GetSSHHostname
I0719 07:34:24.795063    3074 main.go:141] libmachine: (functional-638000) Calling .GetSSHPort
I0719 07:34:24.795150    3074 main.go:141] libmachine: (functional-638000) Calling .GetSSHKeyPath
I0719 07:34:24.795241    3074 main.go:141] libmachine: (functional-638000) Calling .GetSSHUsername
I0719 07:34:24.795325    3074 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/functional-638000/id_rsa Username:docker}
I0719 07:34:24.830273    3074 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0719 07:34:24.853645    3074 main.go:141] libmachine: Making call to close driver server
I0719 07:34:24.853655    3074 main.go:141] libmachine: (functional-638000) Calling .Close
I0719 07:34:24.853822    3074 main.go:141] libmachine: Successfully made call to close driver server
I0719 07:34:24.853824    3074 main.go:141] libmachine: (functional-638000) DBG | Closing plugin on server side
I0719 07:34:24.853830    3074 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 07:34:24.853837    3074 main.go:141] libmachine: Making call to close driver server
I0719 07:34:24.853841    3074 main.go:141] libmachine: (functional-638000) Calling .Close
I0719 07:34:24.854073    3074 main.go:141] libmachine: Successfully made call to close driver server
I0719 07:34:24.854081    3074 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 07:34:24.854105    3074 main.go:141] libmachine: (functional-638000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-638000 ssh pgrep buildkitd: exit status 1 (128.082897ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image build -t localhost/my-image:functional-638000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-638000 image build -t localhost/my-image:functional-638000 testdata/build --alsologtostderr: (1.74954594s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-638000 image build -t localhost/my-image:functional-638000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in b0ac3a43d74c
---> Removed intermediate container b0ac3a43d74c
---> d679442e0d69
Step 3/3 : ADD content.txt /
---> c4cd9a0ab079
Successfully built c4cd9a0ab079
Successfully tagged localhost/my-image:functional-638000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-638000 image build -t localhost/my-image:functional-638000 testdata/build --alsologtostderr:
I0719 07:34:25.219960    3087 out.go:291] Setting OutFile to fd 1 ...
I0719 07:34:25.220230    3087 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:34:25.220237    3087 out.go:304] Setting ErrFile to fd 2...
I0719 07:34:25.220240    3087 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:34:25.220432    3087 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
I0719 07:34:25.221131    3087 config.go:182] Loaded profile config "functional-638000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:34:25.221798    3087 config.go:182] Loaded profile config "functional-638000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:34:25.222197    3087 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 07:34:25.222237    3087 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 07:34:25.230721    3087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51168
I0719 07:34:25.231153    3087 main.go:141] libmachine: () Calling .GetVersion
I0719 07:34:25.231596    3087 main.go:141] libmachine: Using API Version  1
I0719 07:34:25.231610    3087 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 07:34:25.231852    3087 main.go:141] libmachine: () Calling .GetMachineName
I0719 07:34:25.231988    3087 main.go:141] libmachine: (functional-638000) Calling .GetState
I0719 07:34:25.232089    3087 main.go:141] libmachine: (functional-638000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0719 07:34:25.232159    3087 main.go:141] libmachine: (functional-638000) DBG | hyperkit pid from json: 2349
I0719 07:34:25.233369    3087 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 07:34:25.233395    3087 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 07:34:25.242114    3087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51170
I0719 07:34:25.242476    3087 main.go:141] libmachine: () Calling .GetVersion
I0719 07:34:25.242818    3087 main.go:141] libmachine: Using API Version  1
I0719 07:34:25.242827    3087 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 07:34:25.243040    3087 main.go:141] libmachine: () Calling .GetMachineName
I0719 07:34:25.243152    3087 main.go:141] libmachine: (functional-638000) Calling .DriverName
I0719 07:34:25.243312    3087 ssh_runner.go:195] Run: systemctl --version
I0719 07:34:25.243337    3087 main.go:141] libmachine: (functional-638000) Calling .GetSSHHostname
I0719 07:34:25.243420    3087 main.go:141] libmachine: (functional-638000) Calling .GetSSHPort
I0719 07:34:25.243501    3087 main.go:141] libmachine: (functional-638000) Calling .GetSSHKeyPath
I0719 07:34:25.243582    3087 main.go:141] libmachine: (functional-638000) Calling .GetSSHUsername
I0719 07:34:25.243665    3087 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/functional-638000/id_rsa Username:docker}
I0719 07:34:25.286789    3087 build_images.go:161] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.4285815206.tar
I0719 07:34:25.286876    3087 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0719 07:34:25.295750    3087 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4285815206.tar
I0719 07:34:25.302246    3087 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4285815206.tar: stat -c "%s %y" /var/lib/minikube/build/build.4285815206.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4285815206.tar': No such file or directory
I0719 07:34:25.302282    3087 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.4285815206.tar --> /var/lib/minikube/build/build.4285815206.tar (3072 bytes)
I0719 07:34:25.339659    3087 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4285815206
I0719 07:34:25.360502    3087 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4285815206 -xf /var/lib/minikube/build/build.4285815206.tar
I0719 07:34:25.370055    3087 docker.go:360] Building image: /var/lib/minikube/build/build.4285815206
I0719 07:34:25.370134    3087 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-638000 /var/lib/minikube/build/build.4285815206
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0719 07:34:26.871531    3087 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-638000 /var/lib/minikube/build/build.4285815206: (1.501385706s)
I0719 07:34:26.871597    3087 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4285815206
I0719 07:34:26.879863    3087 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4285815206.tar
I0719 07:34:26.887686    3087 build_images.go:217] Built localhost/my-image:functional-638000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.4285815206.tar
I0719 07:34:26.887726    3087 build_images.go:133] succeeded building to: functional-638000
I0719 07:34:26.887731    3087 build_images.go:134] failed building to: 
I0719 07:34:26.887746    3087 main.go:141] libmachine: Making call to close driver server
I0719 07:34:26.887753    3087 main.go:141] libmachine: (functional-638000) Calling .Close
I0719 07:34:26.887900    3087 main.go:141] libmachine: Successfully made call to close driver server
I0719 07:34:26.887912    3087 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 07:34:26.887919    3087 main.go:141] libmachine: Making call to close driver server
I0719 07:34:26.887934    3087 main.go:141] libmachine: (functional-638000) Calling .Close
I0719 07:34:26.887935    3087 main.go:141] libmachine: (functional-638000) DBG | Closing plugin on server side
I0719 07:34:26.888072    3087 main.go:141] libmachine: Successfully made call to close driver server
I0719 07:34:26.888087    3087 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 07:34:26.888103    3087 main.go:141] libmachine: (functional-638000) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image ls
2024/07/19 07:34:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.04s)
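For reference, the three logged build steps imply a Dockerfile along these lines (a reconstruction from the log output above, not the actual file under `testdata/build`):

```dockerfile
# Reconstructed from the "Step 1/3".."Step 3/3" lines in the build log.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```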

TestFunctional/parallel/ImageCommands/Setup (1.87s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.823156723s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-638000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.87s)

TestFunctional/parallel/DockerEnv/bash (0.64s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-638000 docker-env) && out/minikube-darwin-amd64 status -p functional-638000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-638000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.64s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.98s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image load --daemon docker.io/kicbase/echo-server:functional-638000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.98s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image load --daemon docker.io/kicbase/echo-server:functional-638000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.63s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-638000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image load --daemon docker.io/kicbase/echo-server:functional-638000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image save docker.io/kicbase/echo-server:functional-638000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.40s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image rm docker.io/kicbase/echo-server:functional-638000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.40s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-638000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 image save --daemon docker.io/kicbase/echo-server:functional-638000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-638000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
TestFunctional/parallel/ServiceCmd/DeployApp (20.13s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-638000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-638000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-c2php" [93f4c0aa-44fa-4ea0-ab8a-e08ceabf2eca] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-c2php" [93f4c0aa-44fa-4ea0-ab8a-e08ceabf2eca] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.005597851s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.13s)
TestFunctional/parallel/ServiceCmd/List (0.18s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.18s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.18s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 service list -o json
functional_test.go:1490: Took "181.690808ms" to run "out/minikube-darwin-amd64 -p functional-638000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.18s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.169.0.4:31996
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)
TestFunctional/parallel/ServiceCmd/Format (0.24s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.24s)
TestFunctional/parallel/ServiceCmd/URL (0.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.169.0.4:31996
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.25s)
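The URL test above gets a `http://<node-ip>:<nodeport>` endpoint back from `minikube service hello-node --url`. A wrapper script can split that string into host and NodePort with plain POSIX parameter expansion, no external tools; sketch below, using the endpoint reported in this run as sample input:

```shell
# Endpoint string as reported above by `minikube service hello-node --url`.
endpoint="http://192.169.0.4:31996"
# Strip the scheme, then split host and NodePort on the last colon.
hostport=${endpoint#http://}
host=${hostport%%:*}
port=${hostport##*:}
echo "$host $port"
```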
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-638000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-638000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-638000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2800: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-638000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-638000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-638000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [95d6c2af-c535-40bb-a43c-c617dcb3d72c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [95d6c2af-c535-40bb-a43c-c617dcb3d72c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004546s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.14s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-638000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.186.210 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)
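The dig test above queries the cluster DNS server directly (`dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A`) and checks the A record it gets back. A sketch of extracting the resolved address from one ANSWER-section line, with the line captured here as sample data (the IP matches the tunnel address reported earlier in this run):

```shell
# One ANSWER-section line in the format dig prints (sample data).
answer='nginx-svc.default.svc.cluster.local. 5 IN A 10.109.186.210'
# Field 4 is the record type, field 5 the address; awk pulls out the A record.
ip=$(echo "$answer" | awk '$4 == "A" { print $5 }')
echo "$ip"
```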
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-638000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)
TestFunctional/parallel/ProfileCmd/profile_list (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "184.903159ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "76.326724ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "185.534737ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "76.29533ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
TestFunctional/parallel/MountCmd/any-port (8.09s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-638000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2662087106/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721399643729164000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2662087106/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721399643729164000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2662087106/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721399643729164000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2662087106/001/test-1721399643729164000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-638000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (146.664652ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 19 14:34 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 19 14:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 19 14:34 test-1721399643729164000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh cat /mount-9p/test-1721399643729164000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-638000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c1d6e710-ae51-4820-92f1-e96e6a9034f4] Pending
helpers_test.go:344: "busybox-mount" [c1d6e710-ae51-4820-92f1-e96e6a9034f4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c1d6e710-ae51-4820-92f1-e96e6a9034f4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c1d6e710-ae51-4820-92f1-e96e6a9034f4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004358123s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-638000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-638000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2662087106/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.09s)
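The mount test above polls `findmnt -T /mount-9p | grep 9p` until the 9p filesystem appears (the first probe failing with exit status 1, as logged, just means the mount was not up yet). A sketch of the readiness check over sample `findmnt` output (column layout illustrative):

```shell
# Sample `findmnt -T /mount-9p` output once the mount is up (illustrative).
findmnt_out='TARGET    SOURCE       FSTYPE OPTIONS
/mount-9p 192.168.64.1 9p     rw,relatime'
# The test simply greps for "9p"; an empty result (grep exit 1) means
# the mount is not ready yet and the check should be retried.
if echo "$findmnt_out" | grep -q 9p; then
  mounted=yes
else
  mounted=no
fi
echo "$mounted"
```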
TestFunctional/parallel/MountCmd/specific-port (1.48s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-638000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port405394542/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-638000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (155.558009ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-638000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port405394542/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-638000 ssh "sudo umount -f /mount-9p": exit status 1 (127.385577ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-638000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-638000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port405394542/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.48s)
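The cleanup step above shows `sudo umount -f /mount-9p` failing with ssh status 32 and "not mounted." because the mount daemon had already torn the mount down; util-linux `umount` uses exit status 32 for mount failure, which a cleanup script can treat as benign. A simulated sketch of that tolerance (`fake_umount` is a hypothetical stand-in for the real command so the logic is runnable anywhere):

```shell
# fake_umount simulates `umount /mount-9p` on a path that is not mounted:
# util-linux umount reports the failure and exits with status 32.
fake_umount() { echo 'umount: /mount-9p: not mounted.' >&2; return 32; }

if fake_umount 2>/dev/null; then
  result=unmounted
else
  status=$?
  # During cleanup, "not mounted" (32) means there is nothing left to do;
  # any other failure is a real error.
  if [ "$status" -eq 32 ]; then result=already-clean; else result=error; fi
fi
echo "$result"
```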
TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-638000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1352220440/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-638000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1352220440/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-638000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1352220440/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-638000 ssh "findmnt -T" /mount1: exit status 1 (158.14377ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-638000 ssh "findmnt -T" /mount1: exit status 1 (204.267213ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-638000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-638000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-638000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1352220440/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-638000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1352220440/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-638000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1352220440/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-638000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-638000
--- PASS: TestFunctional/delete_my-image_image (0.02s)
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-638000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
TestMultiControlPlane/serial/StartCluster (319.07s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-325000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0719 07:34:56.927638    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:38:16.596182    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:38:16.601282    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:38:16.612149    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:38:16.633299    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:38:16.673487    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:38:16.755075    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:38:16.916460    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:38:17.237962    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:38:17.879413    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:38:19.160487    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:38:21.721311    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:38:26.842825    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:38:37.083277    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:38:57.564445    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:39:29.244821    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:39:38.557883    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-325000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (5m18.700489994s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (319.07s)

TestMultiControlPlane/serial/DeployApp (4.87s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-325000 -- rollout status deployment/busybox: (2.635021658s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec busybox-fc5497c4f-29h67 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec busybox-fc5497c4f-c4z99 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec busybox-fc5497c4f-qj8fb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec busybox-fc5497c4f-29h67 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec busybox-fc5497c4f-c4z99 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec busybox-fc5497c4f-qj8fb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec busybox-fc5497c4f-29h67 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec busybox-fc5497c4f-c4z99 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec busybox-fc5497c4f-qj8fb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.87s)
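
The DeployApp step fans the same DNS check out to every replica: pod names come from the jsonpath query, then each pod execs nslookup against three targets. A minimal sketch of that fan-out loop, reusing the pod names from the log (the kubectl invocation is only echoed here, since no live cluster is assumed):

```shell
# Pod names as returned by: kubectl ... -o jsonpath='{.items[*].metadata.name}'
pods='busybox-fc5497c4f-29h67 busybox-fc5497c4f-c4z99 busybox-fc5497c4f-qj8fb'

# Each replica resolves three targets; collect the commands the test would run.
cmds=$(for pod in $pods; do
  for target in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
    echo "out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec $pod -- nslookup $target"
  done
done)
printf '%s\n' "$cmds"
```

Three pods times three targets yields the nine exec runs visible above (ha_test.go:171, :181, :189).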

TestMultiControlPlane/serial/PingHostFromPods (1.28s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec busybox-fc5497c4f-29h67 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec busybox-fc5497c4f-29h67 -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec busybox-fc5497c4f-c4z99 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec busybox-fc5497c4f-c4z99 -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec busybox-fc5497c4f-qj8fb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-325000 -- exec busybox-fc5497c4f-qj8fb -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.28s)
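
Before pinging, the test extracts the host IP from nslookup output with `awk 'NR==5' | cut -d' ' -f3`. That pipeline can be exercised locally against sample busybox-style nslookup output (the text below is illustrative, not captured from this run):

```shell
# Sample busybox nslookup output (illustrative); line 5 carries the answer.
nslookup_output='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.169.0.1'

# Line 5 is "Address 1: <ip>"; the third space-separated field is the IP itself.
host_ip=$(printf '%s\n' "$nslookup_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The extracted address then feeds the `ping -c 1 <ip>` exec in the ha_test.go:218 step.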

TestMultiControlPlane/serial/AddWorkerNode (51.71s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-325000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-325000 -v=7 --alsologtostderr: (51.27196456s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (51.71s)

TestMultiControlPlane/serial/NodeLabels (0.05s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-325000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.33s)

TestMultiControlPlane/serial/CopyFile (8.9s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp testdata/cp-test.txt ha-325000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile287456794/001/cp-test_ha-325000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000:/home/docker/cp-test.txt ha-325000-m02:/home/docker/cp-test_ha-325000_ha-325000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m02 "sudo cat /home/docker/cp-test_ha-325000_ha-325000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000:/home/docker/cp-test.txt ha-325000-m03:/home/docker/cp-test_ha-325000_ha-325000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m03 "sudo cat /home/docker/cp-test_ha-325000_ha-325000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000:/home/docker/cp-test.txt ha-325000-m04:/home/docker/cp-test_ha-325000_ha-325000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m04 "sudo cat /home/docker/cp-test_ha-325000_ha-325000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp testdata/cp-test.txt ha-325000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile287456794/001/cp-test_ha-325000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000-m02:/home/docker/cp-test.txt ha-325000:/home/docker/cp-test_ha-325000-m02_ha-325000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000 "sudo cat /home/docker/cp-test_ha-325000-m02_ha-325000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000-m02:/home/docker/cp-test.txt ha-325000-m03:/home/docker/cp-test_ha-325000-m02_ha-325000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m03 "sudo cat /home/docker/cp-test_ha-325000-m02_ha-325000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000-m02:/home/docker/cp-test.txt ha-325000-m04:/home/docker/cp-test_ha-325000-m02_ha-325000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m04 "sudo cat /home/docker/cp-test_ha-325000-m02_ha-325000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp testdata/cp-test.txt ha-325000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile287456794/001/cp-test_ha-325000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000-m03:/home/docker/cp-test.txt ha-325000:/home/docker/cp-test_ha-325000-m03_ha-325000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000 "sudo cat /home/docker/cp-test_ha-325000-m03_ha-325000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000-m03:/home/docker/cp-test.txt ha-325000-m02:/home/docker/cp-test_ha-325000-m03_ha-325000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m02 "sudo cat /home/docker/cp-test_ha-325000-m03_ha-325000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000-m03:/home/docker/cp-test.txt ha-325000-m04:/home/docker/cp-test_ha-325000-m03_ha-325000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m04 "sudo cat /home/docker/cp-test_ha-325000-m03_ha-325000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp testdata/cp-test.txt ha-325000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000-m04:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile287456794/001/cp-test_ha-325000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000-m04:/home/docker/cp-test.txt ha-325000:/home/docker/cp-test_ha-325000-m04_ha-325000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000 "sudo cat /home/docker/cp-test_ha-325000-m04_ha-325000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000-m04:/home/docker/cp-test.txt ha-325000-m02:/home/docker/cp-test_ha-325000-m04_ha-325000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m02 "sudo cat /home/docker/cp-test_ha-325000-m04_ha-325000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 cp ha-325000-m04:/home/docker/cp-test.txt ha-325000-m03:/home/docker/cp-test_ha-325000-m04_ha-325000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 ssh -n ha-325000-m03 "sudo cat /home/docker/cp-test_ha-325000-m04_ha-325000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (8.90s)
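
Every CopyFile hop above follows the same copy-then-read-back pattern: `minikube cp` moves the file, then `minikube ssh "sudo cat"` confirms the bytes arrived. A local sketch of that pattern using plain `cp` in a temp directory (no VM involved):

```shell
# Stand-ins for the source file and a node's filesystem.
tmpdir=$(mktemp -d)
printf 'cp-test payload\n' > "$tmpdir/cp-test.txt"
mkdir -p "$tmpdir/node"

# "minikube cp" leg: push the file to the node.
cp "$tmpdir/cp-test.txt" "$tmpdir/node/cp-test.txt"

# "minikube ssh 'sudo cat'" leg: read it back and compare with the original.
copied=$(cat "$tmpdir/node/cp-test.txt")
original=$(cat "$tmpdir/cp-test.txt")
[ "$copied" = "$original" ] && echo "contents match"
rm -rf "$tmpdir"
```

The test repeats this for every (source node, destination node) pair, which is why the section is dominated by near-identical cp/ssh pairs.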

TestMultiControlPlane/serial/StopSecondaryNode (8.68s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 node stop m02 -v=7 --alsologtostderr
E0719 07:41:00.496678    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-325000 node stop m02 -v=7 --alsologtostderr: (8.329512631s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-325000 status -v=7 --alsologtostderr: exit status 7 (346.570219ms)

-- stdout --
	ha-325000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-325000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-325000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-325000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0719 07:41:07.534115    3562 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:41:07.534394    3562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:41:07.534400    3562 out.go:304] Setting ErrFile to fd 2...
	I0719 07:41:07.534406    3562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:41:07.534595    3562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
	I0719 07:41:07.534771    3562 out.go:298] Setting JSON to false
	I0719 07:41:07.534793    3562 mustload.go:65] Loading cluster: ha-325000
	I0719 07:41:07.534835    3562 notify.go:220] Checking for updates...
	I0719 07:41:07.535127    3562 config.go:182] Loaded profile config "ha-325000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:41:07.535141    3562 status.go:255] checking status of ha-325000 ...
	I0719 07:41:07.535514    3562 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:41:07.535569    3562 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:41:07.544342    3562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51907
	I0719 07:41:07.544719    3562 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:41:07.545120    3562 main.go:141] libmachine: Using API Version  1
	I0719 07:41:07.545131    3562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:41:07.545395    3562 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:41:07.545524    3562 main.go:141] libmachine: (ha-325000) Calling .GetState
	I0719 07:41:07.545627    3562 main.go:141] libmachine: (ha-325000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 07:41:07.545709    3562 main.go:141] libmachine: (ha-325000) DBG | hyperkit pid from json: 3119
	I0719 07:41:07.546689    3562 status.go:330] ha-325000 host status = "Running" (err=<nil>)
	I0719 07:41:07.546706    3562 host.go:66] Checking if "ha-325000" exists ...
	I0719 07:41:07.546950    3562 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:41:07.546969    3562 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:41:07.555216    3562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51909
	I0719 07:41:07.555570    3562 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:41:07.555896    3562 main.go:141] libmachine: Using API Version  1
	I0719 07:41:07.555921    3562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:41:07.556172    3562 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:41:07.556289    3562 main.go:141] libmachine: (ha-325000) Calling .GetIP
	I0719 07:41:07.556375    3562 host.go:66] Checking if "ha-325000" exists ...
	I0719 07:41:07.556654    3562 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:41:07.556681    3562 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:41:07.566552    3562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51911
	I0719 07:41:07.566934    3562 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:41:07.567248    3562 main.go:141] libmachine: Using API Version  1
	I0719 07:41:07.567258    3562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:41:07.567451    3562 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:41:07.567558    3562 main.go:141] libmachine: (ha-325000) Calling .DriverName
	I0719 07:41:07.567713    3562 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 07:41:07.567732    3562 main.go:141] libmachine: (ha-325000) Calling .GetSSHHostname
	I0719 07:41:07.567812    3562 main.go:141] libmachine: (ha-325000) Calling .GetSSHPort
	I0719 07:41:07.567892    3562 main.go:141] libmachine: (ha-325000) Calling .GetSSHKeyPath
	I0719 07:41:07.567971    3562 main.go:141] libmachine: (ha-325000) Calling .GetSSHUsername
	I0719 07:41:07.568065    3562 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/ha-325000/id_rsa Username:docker}
	I0719 07:41:07.596492    3562 ssh_runner.go:195] Run: systemctl --version
	I0719 07:41:07.605517    3562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 07:41:07.617878    3562 kubeconfig.go:125] found "ha-325000" server: "https://192.169.0.254:8443"
	I0719 07:41:07.617901    3562 api_server.go:166] Checking apiserver status ...
	I0719 07:41:07.617937    3562 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:41:07.629872    3562 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1992/cgroup
	W0719 07:41:07.637764    3562 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1992/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 07:41:07.637815    3562 ssh_runner.go:195] Run: ls
	I0719 07:41:07.641257    3562 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0719 07:41:07.644455    3562 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0719 07:41:07.644466    3562 status.go:422] ha-325000 apiserver status = Running (err=<nil>)
	I0719 07:41:07.644475    3562 status.go:257] ha-325000 status: &{Name:ha-325000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 07:41:07.644488    3562 status.go:255] checking status of ha-325000-m02 ...
	I0719 07:41:07.644735    3562 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:41:07.644755    3562 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:41:07.653213    3562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51915
	I0719 07:41:07.653563    3562 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:41:07.653867    3562 main.go:141] libmachine: Using API Version  1
	I0719 07:41:07.653876    3562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:41:07.654097    3562 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:41:07.654213    3562 main.go:141] libmachine: (ha-325000-m02) Calling .GetState
	I0719 07:41:07.654293    3562 main.go:141] libmachine: (ha-325000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 07:41:07.654367    3562 main.go:141] libmachine: (ha-325000-m02) DBG | hyperkit pid from json: 3131
	I0719 07:41:07.655335    3562 main.go:141] libmachine: (ha-325000-m02) DBG | hyperkit pid 3131 missing from process table
	I0719 07:41:07.655359    3562 status.go:330] ha-325000-m02 host status = "Stopped" (err=<nil>)
	I0719 07:41:07.655367    3562 status.go:343] host is not running, skipping remaining checks
	I0719 07:41:07.655377    3562 status.go:257] ha-325000-m02 status: &{Name:ha-325000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 07:41:07.655400    3562 status.go:255] checking status of ha-325000-m03 ...
	I0719 07:41:07.655675    3562 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:41:07.655696    3562 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:41:07.664073    3562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51917
	I0719 07:41:07.664436    3562 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:41:07.664772    3562 main.go:141] libmachine: Using API Version  1
	I0719 07:41:07.664789    3562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:41:07.664994    3562 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:41:07.665103    3562 main.go:141] libmachine: (ha-325000-m03) Calling .GetState
	I0719 07:41:07.665203    3562 main.go:141] libmachine: (ha-325000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 07:41:07.665283    3562 main.go:141] libmachine: (ha-325000-m03) DBG | hyperkit pid from json: 3149
	I0719 07:41:07.666254    3562 status.go:330] ha-325000-m03 host status = "Running" (err=<nil>)
	I0719 07:41:07.666263    3562 host.go:66] Checking if "ha-325000-m03" exists ...
	I0719 07:41:07.666507    3562 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:41:07.666527    3562 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:41:07.675052    3562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51919
	I0719 07:41:07.675376    3562 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:41:07.675685    3562 main.go:141] libmachine: Using API Version  1
	I0719 07:41:07.675694    3562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:41:07.675948    3562 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:41:07.676070    3562 main.go:141] libmachine: (ha-325000-m03) Calling .GetIP
	I0719 07:41:07.676174    3562 host.go:66] Checking if "ha-325000-m03" exists ...
	I0719 07:41:07.676427    3562 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:41:07.676452    3562 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:41:07.684814    3562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51921
	I0719 07:41:07.685158    3562 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:41:07.685491    3562 main.go:141] libmachine: Using API Version  1
	I0719 07:41:07.685504    3562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:41:07.685740    3562 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:41:07.685857    3562 main.go:141] libmachine: (ha-325000-m03) Calling .DriverName
	I0719 07:41:07.685980    3562 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 07:41:07.685992    3562 main.go:141] libmachine: (ha-325000-m03) Calling .GetSSHHostname
	I0719 07:41:07.686078    3562 main.go:141] libmachine: (ha-325000-m03) Calling .GetSSHPort
	I0719 07:41:07.686148    3562 main.go:141] libmachine: (ha-325000-m03) Calling .GetSSHKeyPath
	I0719 07:41:07.686260    3562 main.go:141] libmachine: (ha-325000-m03) Calling .GetSSHUsername
	I0719 07:41:07.686337    3562 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/ha-325000-m03/id_rsa Username:docker}
	I0719 07:41:07.720690    3562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 07:41:07.731658    3562 kubeconfig.go:125] found "ha-325000" server: "https://192.169.0.254:8443"
	I0719 07:41:07.731672    3562 api_server.go:166] Checking apiserver status ...
	I0719 07:41:07.731726    3562 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:41:07.743247    3562 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2016/cgroup
	W0719 07:41:07.750949    3562 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2016/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 07:41:07.751006    3562 ssh_runner.go:195] Run: ls
	I0719 07:41:07.754318    3562 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0719 07:41:07.757547    3562 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0719 07:41:07.757558    3562 status.go:422] ha-325000-m03 apiserver status = Running (err=<nil>)
	I0719 07:41:07.757565    3562 status.go:257] ha-325000-m03 status: &{Name:ha-325000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 07:41:07.757575    3562 status.go:255] checking status of ha-325000-m04 ...
	I0719 07:41:07.757838    3562 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:41:07.757865    3562 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:41:07.766464    3562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51925
	I0719 07:41:07.766844    3562 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:41:07.767156    3562 main.go:141] libmachine: Using API Version  1
	I0719 07:41:07.767166    3562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:41:07.767390    3562 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:41:07.767499    3562 main.go:141] libmachine: (ha-325000-m04) Calling .GetState
	I0719 07:41:07.767593    3562 main.go:141] libmachine: (ha-325000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 07:41:07.767692    3562 main.go:141] libmachine: (ha-325000-m04) DBG | hyperkit pid from json: 3242
	I0719 07:41:07.768864    3562 status.go:330] ha-325000-m04 host status = "Running" (err=<nil>)
	I0719 07:41:07.768874    3562 host.go:66] Checking if "ha-325000-m04" exists ...
	I0719 07:41:07.769136    3562 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:41:07.769159    3562 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:41:07.777827    3562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51927
	I0719 07:41:07.778169    3562 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:41:07.778498    3562 main.go:141] libmachine: Using API Version  1
	I0719 07:41:07.778508    3562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:41:07.778713    3562 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:41:07.778828    3562 main.go:141] libmachine: (ha-325000-m04) Calling .GetIP
	I0719 07:41:07.778923    3562 host.go:66] Checking if "ha-325000-m04" exists ...
	I0719 07:41:07.779160    3562 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:41:07.779181    3562 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:41:07.787556    3562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51929
	I0719 07:41:07.787905    3562 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:41:07.788289    3562 main.go:141] libmachine: Using API Version  1
	I0719 07:41:07.788307    3562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:41:07.788514    3562 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:41:07.788625    3562 main.go:141] libmachine: (ha-325000-m04) Calling .DriverName
	I0719 07:41:07.788749    3562 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 07:41:07.788766    3562 main.go:141] libmachine: (ha-325000-m04) Calling .GetSSHHostname
	I0719 07:41:07.788848    3562 main.go:141] libmachine: (ha-325000-m04) Calling .GetSSHPort
	I0719 07:41:07.788923    3562 main.go:141] libmachine: (ha-325000-m04) Calling .GetSSHKeyPath
	I0719 07:41:07.789001    3562 main.go:141] libmachine: (ha-325000-m04) Calling .GetSSHUsername
	I0719 07:41:07.789070    3562 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/ha-325000-m04/id_rsa Username:docker}
	I0719 07:41:07.816712    3562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 07:41:07.826669    3562 status.go:257] ha-325000-m04 status: &{Name:ha-325000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.68s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.26s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.26s)

TestMultiControlPlane/serial/RestartSecondaryNode (40.47s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-325000 node start m02 -v=7 --alsologtostderr: (39.981843852s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (40.47s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (215.69s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-325000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-325000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-325000 -v=7 --alsologtostderr: (27.05441856s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-325000 --wait=true -v=7 --alsologtostderr
E0719 07:43:16.643272    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:43:44.336474    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:44:29.275486    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-325000 --wait=true -v=7 --alsologtostderr: (3m8.52179701s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-325000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (215.69s)

TestMultiControlPlane/serial/DeleteSecondaryNode (8.14s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-325000 node delete m03 -v=7 --alsologtostderr: (7.688930857s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.14s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.26s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.26s)

TestMultiControlPlane/serial/StopCluster (24.93s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 stop -v=7 --alsologtostderr
E0719 07:45:52.335948    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-325000 stop -v=7 --alsologtostderr: (24.837861843s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-325000 status -v=7 --alsologtostderr: exit status 7 (91.813912ms)

-- stdout --
	ha-325000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-325000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-325000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:45:57.899774    3722 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:45:57.900044    3722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:57.900050    3722 out.go:304] Setting ErrFile to fd 2...
	I0719 07:45:57.900054    3722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:57.900239    3722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
	I0719 07:45:57.900408    3722 out.go:298] Setting JSON to false
	I0719 07:45:57.900428    3722 mustload.go:65] Loading cluster: ha-325000
	I0719 07:45:57.900467    3722 notify.go:220] Checking for updates...
	I0719 07:45:57.900758    3722 config.go:182] Loaded profile config "ha-325000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:45:57.900773    3722 status.go:255] checking status of ha-325000 ...
	I0719 07:45:57.901157    3722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:45:57.901203    3722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:45:57.910115    3722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52235
	I0719 07:45:57.910598    3722 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:45:57.911165    3722 main.go:141] libmachine: Using API Version  1
	I0719 07:45:57.911174    3722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:45:57.911503    3722 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:45:57.911636    3722 main.go:141] libmachine: (ha-325000) Calling .GetState
	I0719 07:45:57.911722    3722 main.go:141] libmachine: (ha-325000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 07:45:57.911790    3722 main.go:141] libmachine: (ha-325000) DBG | hyperkit pid from json: 3641
	I0719 07:45:57.912728    3722 main.go:141] libmachine: (ha-325000) DBG | hyperkit pid 3641 missing from process table
	I0719 07:45:57.912780    3722 status.go:330] ha-325000 host status = "Stopped" (err=<nil>)
	I0719 07:45:57.912790    3722 status.go:343] host is not running, skipping remaining checks
	I0719 07:45:57.912798    3722 status.go:257] ha-325000 status: &{Name:ha-325000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 07:45:57.912817    3722 status.go:255] checking status of ha-325000-m02 ...
	I0719 07:45:57.913079    3722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:45:57.913100    3722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:45:57.921929    3722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52237
	I0719 07:45:57.922431    3722 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:45:57.922822    3722 main.go:141] libmachine: Using API Version  1
	I0719 07:45:57.922860    3722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:45:57.923098    3722 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:45:57.923221    3722 main.go:141] libmachine: (ha-325000-m02) Calling .GetState
	I0719 07:45:57.923315    3722 main.go:141] libmachine: (ha-325000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 07:45:57.923395    3722 main.go:141] libmachine: (ha-325000-m02) DBG | hyperkit pid from json: 3648
	I0719 07:45:57.924306    3722 main.go:141] libmachine: (ha-325000-m02) DBG | hyperkit pid 3648 missing from process table
	I0719 07:45:57.924326    3722 status.go:330] ha-325000-m02 host status = "Stopped" (err=<nil>)
	I0719 07:45:57.924336    3722 status.go:343] host is not running, skipping remaining checks
	I0719 07:45:57.924343    3722 status.go:257] ha-325000-m02 status: &{Name:ha-325000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 07:45:57.924364    3722 status.go:255] checking status of ha-325000-m04 ...
	I0719 07:45:57.924652    3722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 07:45:57.924711    3722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 07:45:57.933118    3722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52239
	I0719 07:45:57.933446    3722 main.go:141] libmachine: () Calling .GetVersion
	I0719 07:45:57.933802    3722 main.go:141] libmachine: Using API Version  1
	I0719 07:45:57.933815    3722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 07:45:57.934031    3722 main.go:141] libmachine: () Calling .GetMachineName
	I0719 07:45:57.934205    3722 main.go:141] libmachine: (ha-325000-m04) Calling .GetState
	I0719 07:45:57.934279    3722 main.go:141] libmachine: (ha-325000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 07:45:57.934349    3722 main.go:141] libmachine: (ha-325000-m04) DBG | hyperkit pid from json: 3668
	I0719 07:45:57.935269    3722 main.go:141] libmachine: (ha-325000-m04) DBG | hyperkit pid 3668 missing from process table
	I0719 07:45:57.935284    3722 status.go:330] ha-325000-m04 host status = "Stopped" (err=<nil>)
	I0719 07:45:57.935292    3722 status.go:343] host is not running, skipping remaining checks
	I0719 07:45:57.935299    3722 status.go:257] ha-325000-m04 status: &{Name:ha-325000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.93s)

TestMultiControlPlane/serial/RestartCluster (127.84s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-325000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-325000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : (2m7.39616313s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (127.84s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.25s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.25s)

TestMultiControlPlane/serial/AddSecondaryNode (77.14s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-325000 --control-plane -v=7 --alsologtostderr
E0719 07:48:16.642577    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-325000 --control-plane -v=7 --alsologtostderr: (1m16.70017773s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-325000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.14s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.33s)

TestImageBuild/serial/Setup (153.78s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-479000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-479000 --driver=hyperkit : (2m33.780437572s)
--- PASS: TestImageBuild/serial/Setup (153.78s)

TestImageBuild/serial/NormalBuild (1.33s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-479000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-479000: (1.330889503s)
--- PASS: TestImageBuild/serial/NormalBuild (1.33s)

TestImageBuild/serial/BuildWithBuildArg (0.5s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-479000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.50s)

TestImageBuild/serial/BuildWithDockerIgnore (0.25s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-479000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.25s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-479000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

TestJSONOutput/start/Command (52.17s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-136000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-136000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (52.173625037s)
--- PASS: TestJSONOutput/start/Command (52.17s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.46s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-136000 --output=json --user=testUser
E0719 07:53:16.640383    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
--- PASS: TestJSONOutput/pause/Command (0.46s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.47s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-136000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.47s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.33s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-136000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-136000 --output=json --user=testUser: (8.332424259s)
--- PASS: TestJSONOutput/stop/Command (8.33s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.57s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-484000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-484000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (354.524026ms)

-- stdout --
	{"specversion":"1.0","id":"7390102a-42c0-4a41-aa56-bc4603bc7b0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-484000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f42a902d-ba5b-4a1d-bde3-fbb87243500e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"7be8d040-4292-4ef1-b51e-65c7c1a79e09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig"}}
	{"specversion":"1.0","id":"3e5fefce-9aca-4d4e-9cda-65eca08a3328","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"3ae589fd-0cef-4331-ab61-1e5404c47747","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"339e19e6-0a1e-4773-89eb-7a712c9c2cd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube"}}
	{"specversion":"1.0","id":"713a750d-4c3d-4f77-baa2-1db25d539044","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"02b868c9-c37e-47e7-8dd6-9ae34f8a9b2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-484000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-484000
--- PASS: TestErrorJSONOutput (0.57s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (92.56s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-298000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-298000 --driver=hyperkit : (40.655787742s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-300000 --driver=hyperkit 
E0719 07:54:29.273639    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 07:54:39.695665    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-300000 --driver=hyperkit : (40.58519958s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-298000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-300000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-300000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-300000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-300000: (5.240299013s)
helpers_test.go:175: Cleaning up "first-298000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-298000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-298000: (5.276382928s)
--- PASS: TestMinikubeProfile (92.56s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (21.37s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-565000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-565000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (20.370595897s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.37s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-565000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-565000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (488.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-791000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0719 07:58:16.592020    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 07:59:29.223151    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 08:02:32.278811    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 08:03:16.584796    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 08:04:29.214757    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-791000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (8m8.551452172s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (488.79s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-791000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-791000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-791000 -- rollout status deployment/busybox: (2.626760652s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-791000 -- exec busybox-fc5497c4f-ffz7l -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-791000 -- exec busybox-fc5497c4f-lfngx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-791000 -- exec busybox-fc5497c4f-ffz7l -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-791000 -- exec busybox-fc5497c4f-lfngx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-791000 -- exec busybox-fc5497c4f-ffz7l -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-791000 -- exec busybox-fc5497c4f-lfngx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.27s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-791000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-791000 -- exec busybox-fc5497c4f-ffz7l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-791000 -- exec busybox-fc5497c4f-ffz7l -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-791000 -- exec busybox-fc5497c4f-lfngx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-791000 -- exec busybox-fc5497c4f-lfngx -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)
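The test above resolves the host IP inside each pod by piping busybox `nslookup host.minikube.internal` output through `awk 'NR==5' | cut -d' ' -f3`. A minimal local sketch of that parsing, using an illustrative sample of busybox-style nslookup output (the sample text and the 192.169.0.1 address are taken from the log, not captured live):

```shell
# Sample output in the shape older busybox `nslookup` produces; line 5 is the
# "Address 1: <ip>" line for the queried name. This sample is illustrative.
sample_output='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.169.0.1'

# Same pipeline as the test: NR==5 selects the final "Address 1: ..." line,
# and the third space-separated field is the IP itself.
host_ip=$(printf '%s\n' "$sample_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"   # prints 192.169.0.1
```

The extracted address is then the target of the `ping -c 1` calls shown above.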

                                                
                                    
TestMultiNode/serial/AddNode (47.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-791000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-791000 -v 3 --alsologtostderr: (47.66166146s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.98s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-791000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.17s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 cp testdata/cp-test.txt multinode-791000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 cp multinode-791000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3095935599/001/cp-test_multinode-791000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 cp multinode-791000:/home/docker/cp-test.txt multinode-791000-m02:/home/docker/cp-test_multinode-791000_multinode-791000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000-m02 "sudo cat /home/docker/cp-test_multinode-791000_multinode-791000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 cp multinode-791000:/home/docker/cp-test.txt multinode-791000-m03:/home/docker/cp-test_multinode-791000_multinode-791000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000-m03 "sudo cat /home/docker/cp-test_multinode-791000_multinode-791000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 cp testdata/cp-test.txt multinode-791000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 cp multinode-791000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3095935599/001/cp-test_multinode-791000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 cp multinode-791000-m02:/home/docker/cp-test.txt multinode-791000:/home/docker/cp-test_multinode-791000-m02_multinode-791000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000 "sudo cat /home/docker/cp-test_multinode-791000-m02_multinode-791000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 cp multinode-791000-m02:/home/docker/cp-test.txt multinode-791000-m03:/home/docker/cp-test_multinode-791000-m02_multinode-791000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000-m03 "sudo cat /home/docker/cp-test_multinode-791000-m02_multinode-791000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 cp testdata/cp-test.txt multinode-791000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 cp multinode-791000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3095935599/001/cp-test_multinode-791000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 cp multinode-791000-m03:/home/docker/cp-test.txt multinode-791000:/home/docker/cp-test_multinode-791000-m03_multinode-791000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000 "sudo cat /home/docker/cp-test_multinode-791000-m03_multinode-791000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 cp multinode-791000-m03:/home/docker/cp-test.txt multinode-791000-m02:/home/docker/cp-test_multinode-791000-m03_multinode-791000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 ssh -n multinode-791000-m02 "sudo cat /home/docker/cp-test_multinode-791000-m03_multinode-791000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.30s)
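Each step in the CopyFile test follows the same round-trip pattern: copy a file toward a node, then read it back and compare contents. A local sketch of that check, with plain `cp` and `cat` standing in for the `minikube cp` and `minikube ssh -- sudo cat` calls (paths here are hypothetical):

```shell
# Round-trip verification pattern used by TestMultiNode/serial/CopyFile,
# simulated locally: cp stands in for `minikube cp`, cat for
# `minikube ssh -n <node> -- sudo cat`.
workdir=$(mktemp -d)
printf 'Test file for checking\n' > "$workdir/cp-test.txt"   # source file
cp "$workdir/cp-test.txt" "$workdir/cp-test_copy.txt"        # "minikube cp"
readback=$(cat "$workdir/cp-test_copy.txt")                  # "ssh -- sudo cat"
[ "$readback" = 'Test file for checking' ] && echo 'contents match'
rm -r "$workdir"
```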

                                                
                                    
TestMultiNode/serial/StopNode (2.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-791000 node stop m03: (2.347674452s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-791000 status: exit status 7 (252.07451ms)

                                                
                                                
-- stdout --
	multinode-791000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-791000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-791000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-791000 status --alsologtostderr: exit status 7 (253.221268ms)

                                                
                                                
-- stdout --
	multinode-791000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-791000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-791000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 08:05:57.419668    4853 out.go:291] Setting OutFile to fd 1 ...
	I0719 08:05:57.419856    4853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 08:05:57.419862    4853 out.go:304] Setting ErrFile to fd 2...
	I0719 08:05:57.419866    4853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 08:05:57.420039    4853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
	I0719 08:05:57.420230    4853 out.go:298] Setting JSON to false
	I0719 08:05:57.420257    4853 mustload.go:65] Loading cluster: multinode-791000
	I0719 08:05:57.420297    4853 notify.go:220] Checking for updates...
	I0719 08:05:57.421304    4853 config.go:182] Loaded profile config "multinode-791000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 08:05:57.421367    4853 status.go:255] checking status of multinode-791000 ...
	I0719 08:05:57.421868    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 08:05:57.421916    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 08:05:57.430892    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53385
	I0719 08:05:57.431264    4853 main.go:141] libmachine: () Calling .GetVersion
	I0719 08:05:57.431665    4853 main.go:141] libmachine: Using API Version  1
	I0719 08:05:57.431674    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 08:05:57.431880    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0719 08:05:57.432005    4853 main.go:141] libmachine: (multinode-791000) Calling .GetState
	I0719 08:05:57.432090    4853 main.go:141] libmachine: (multinode-791000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:05:57.432159    4853 main.go:141] libmachine: (multinode-791000) DBG | hyperkit pid from json: 4559
	I0719 08:05:57.433424    4853 status.go:330] multinode-791000 host status = "Running" (err=<nil>)
	I0719 08:05:57.433443    4853 host.go:66] Checking if "multinode-791000" exists ...
	I0719 08:05:57.433683    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 08:05:57.433705    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 08:05:57.442091    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53387
	I0719 08:05:57.442426    4853 main.go:141] libmachine: () Calling .GetVersion
	I0719 08:05:57.442814    4853 main.go:141] libmachine: Using API Version  1
	I0719 08:05:57.442845    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 08:05:57.443054    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0719 08:05:57.443166    4853 main.go:141] libmachine: (multinode-791000) Calling .GetIP
	I0719 08:05:57.443238    4853 host.go:66] Checking if "multinode-791000" exists ...
	I0719 08:05:57.443478    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 08:05:57.443498    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 08:05:57.455213    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53389
	I0719 08:05:57.455579    4853 main.go:141] libmachine: () Calling .GetVersion
	I0719 08:05:57.455881    4853 main.go:141] libmachine: Using API Version  1
	I0719 08:05:57.455892    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 08:05:57.456096    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0719 08:05:57.456219    4853 main.go:141] libmachine: (multinode-791000) Calling .DriverName
	I0719 08:05:57.456366    4853 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 08:05:57.456387    4853 main.go:141] libmachine: (multinode-791000) Calling .GetSSHHostname
	I0719 08:05:57.456468    4853 main.go:141] libmachine: (multinode-791000) Calling .GetSSHPort
	I0719 08:05:57.456544    4853 main.go:141] libmachine: (multinode-791000) Calling .GetSSHKeyPath
	I0719 08:05:57.456620    4853 main.go:141] libmachine: (multinode-791000) Calling .GetSSHUsername
	I0719 08:05:57.456690    4853 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/multinode-791000/id_rsa Username:docker}
	I0719 08:05:57.490844    4853 ssh_runner.go:195] Run: systemctl --version
	I0719 08:05:57.495293    4853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 08:05:57.507166    4853 kubeconfig.go:125] found "multinode-791000" server: "https://192.169.0.17:8443"
	I0719 08:05:57.507189    4853 api_server.go:166] Checking apiserver status ...
	I0719 08:05:57.507224    4853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 08:05:57.518774    4853 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup
	W0719 08:05:57.526703    4853 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1971/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 08:05:57.526766    4853 ssh_runner.go:195] Run: ls
	I0719 08:05:57.529924    4853 api_server.go:253] Checking apiserver healthz at https://192.169.0.17:8443/healthz ...
	I0719 08:05:57.532936    4853 api_server.go:279] https://192.169.0.17:8443/healthz returned 200:
	ok
	I0719 08:05:57.532946    4853 status.go:422] multinode-791000 apiserver status = Running (err=<nil>)
	I0719 08:05:57.532955    4853 status.go:257] multinode-791000 status: &{Name:multinode-791000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 08:05:57.532966    4853 status.go:255] checking status of multinode-791000-m02 ...
	I0719 08:05:57.533219    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 08:05:57.533239    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 08:05:57.541870    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53393
	I0719 08:05:57.542203    4853 main.go:141] libmachine: () Calling .GetVersion
	I0719 08:05:57.542568    4853 main.go:141] libmachine: Using API Version  1
	I0719 08:05:57.542582    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 08:05:57.542796    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0719 08:05:57.542928    4853 main.go:141] libmachine: (multinode-791000-m02) Calling .GetState
	I0719 08:05:57.543010    4853 main.go:141] libmachine: (multinode-791000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:05:57.543079    4853 main.go:141] libmachine: (multinode-791000-m02) DBG | hyperkit pid from json: 4576
	I0719 08:05:57.544322    4853 status.go:330] multinode-791000-m02 host status = "Running" (err=<nil>)
	I0719 08:05:57.544330    4853 host.go:66] Checking if "multinode-791000-m02" exists ...
	I0719 08:05:57.544606    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 08:05:57.544628    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 08:05:57.553172    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53395
	I0719 08:05:57.553520    4853 main.go:141] libmachine: () Calling .GetVersion
	I0719 08:05:57.553843    4853 main.go:141] libmachine: Using API Version  1
	I0719 08:05:57.553852    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 08:05:57.554041    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0719 08:05:57.554153    4853 main.go:141] libmachine: (multinode-791000-m02) Calling .GetIP
	I0719 08:05:57.554227    4853 host.go:66] Checking if "multinode-791000-m02" exists ...
	I0719 08:05:57.554480    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 08:05:57.554503    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 08:05:57.562792    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53397
	I0719 08:05:57.563145    4853 main.go:141] libmachine: () Calling .GetVersion
	I0719 08:05:57.563447    4853 main.go:141] libmachine: Using API Version  1
	I0719 08:05:57.563459    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 08:05:57.563668    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0719 08:05:57.563786    4853 main.go:141] libmachine: (multinode-791000-m02) Calling .DriverName
	I0719 08:05:57.563931    4853 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 08:05:57.563942    4853 main.go:141] libmachine: (multinode-791000-m02) Calling .GetSSHHostname
	I0719 08:05:57.564029    4853 main.go:141] libmachine: (multinode-791000-m02) Calling .GetSSHPort
	I0719 08:05:57.564114    4853 main.go:141] libmachine: (multinode-791000-m02) Calling .GetSSHKeyPath
	I0719 08:05:57.564199    4853 main.go:141] libmachine: (multinode-791000-m02) Calling .GetSSHUsername
	I0719 08:05:57.564273    4853 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1032/.minikube/machines/multinode-791000-m02/id_rsa Username:docker}
	I0719 08:05:57.595212    4853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 08:05:57.606493    4853 status.go:257] multinode-791000-m02 status: &{Name:multinode-791000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0719 08:05:57.606509    4853 status.go:255] checking status of multinode-791000-m03 ...
	I0719 08:05:57.606778    4853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 08:05:57.606810    4853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 08:05:57.615342    4853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53400
	I0719 08:05:57.615702    4853 main.go:141] libmachine: () Calling .GetVersion
	I0719 08:05:57.616029    4853 main.go:141] libmachine: Using API Version  1
	I0719 08:05:57.616040    4853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 08:05:57.616259    4853 main.go:141] libmachine: () Calling .GetMachineName
	I0719 08:05:57.616362    4853 main.go:141] libmachine: (multinode-791000-m03) Calling .GetState
	I0719 08:05:57.616443    4853 main.go:141] libmachine: (multinode-791000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:05:57.616524    4853 main.go:141] libmachine: (multinode-791000-m03) DBG | hyperkit pid from json: 4644
	I0719 08:05:57.617761    4853 main.go:141] libmachine: (multinode-791000-m03) DBG | hyperkit pid 4644 missing from process table
	I0719 08:05:57.617788    4853 status.go:330] multinode-791000-m03 host status = "Stopped" (err=<nil>)
	I0719 08:05:57.617808    4853 status.go:343] host is not running, skipping remaining checks
	I0719 08:05:57.617815    4853 status.go:257] multinode-791000-m03 status: &{Name:multinode-791000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.85s)
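Note that `minikube status` signals a degraded cluster through its exit code (exit status 7 in the run above, once m03 is stopped), so scripts wrapping it must capture the code instead of relying on `set -e`. A sketch of that handling, with `(exit 7)` as a stand-in for the real status command:

```shell
# `minikube -p multinode-791000 status` exits 7 above because one node is
# stopped. `(exit 7)` stands in for that command so the sketch runs anywhere.
set +e
(exit 7)              # stand-in for: minikube -p multinode-791000 status
status_code=$?
set -e

if [ "$status_code" -ne 0 ]; then
    echo "cluster degraded (exit $status_code)"
fi
```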

                                                
                                    
TestMultiNode/serial/StartAfterStop (36.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-791000 node start m03 -v=7 --alsologtostderr: (36.146093565s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.51s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (200.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-791000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-791000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-791000: (18.820385297s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-791000 --wait=true -v=8 --alsologtostderr
E0719 08:08:16.575535    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 08:09:29.207345    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-791000 --wait=true -v=8 --alsologtostderr: (3m1.234454401s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-791000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (200.17s)

                                                
                                    
TestMultiNode/serial/DeleteNode (3.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-791000 node delete m03: (3.013585113s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.35s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (16.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-791000 stop: (16.603288451s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-791000 status: exit status 7 (78.513213ms)

                                                
                                                
-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-791000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-791000 status --alsologtostderr: exit status 7 (77.298768ms)

-- stdout --
	multinode-791000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-791000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0719 08:10:14.374635    4980 out.go:291] Setting OutFile to fd 1 ...
	I0719 08:10:14.374935    4980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 08:10:14.374941    4980 out.go:304] Setting ErrFile to fd 2...
	I0719 08:10:14.374945    4980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 08:10:14.375128    4980 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1032/.minikube/bin
	I0719 08:10:14.375306    4980 out.go:298] Setting JSON to false
	I0719 08:10:14.375330    4980 mustload.go:65] Loading cluster: multinode-791000
	I0719 08:10:14.375364    4980 notify.go:220] Checking for updates...
	I0719 08:10:14.375659    4980 config.go:182] Loaded profile config "multinode-791000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 08:10:14.375672    4980 status.go:255] checking status of multinode-791000 ...
	I0719 08:10:14.376011    4980 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 08:10:14.376067    4980 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 08:10:14.384496    4980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53630
	I0719 08:10:14.384826    4980 main.go:141] libmachine: () Calling .GetVersion
	I0719 08:10:14.385256    4980 main.go:141] libmachine: Using API Version  1
	I0719 08:10:14.385269    4980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 08:10:14.385494    4980 main.go:141] libmachine: () Calling .GetMachineName
	I0719 08:10:14.385615    4980 main.go:141] libmachine: (multinode-791000) Calling .GetState
	I0719 08:10:14.385702    4980 main.go:141] libmachine: (multinode-791000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:10:14.385769    4980 main.go:141] libmachine: (multinode-791000) DBG | hyperkit pid from json: 4911
	I0719 08:10:14.386768    4980 main.go:141] libmachine: (multinode-791000) DBG | hyperkit pid 4911 missing from process table
	I0719 08:10:14.386806    4980 status.go:330] multinode-791000 host status = "Stopped" (err=<nil>)
	I0719 08:10:14.386814    4980 status.go:343] host is not running, skipping remaining checks
	I0719 08:10:14.386820    4980 status.go:257] multinode-791000 status: &{Name:multinode-791000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 08:10:14.386837    4980 status.go:255] checking status of multinode-791000-m02 ...
	I0719 08:10:14.387089    4980 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 08:10:14.387121    4980 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 08:10:14.395367    4980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53632
	I0719 08:10:14.395740    4980 main.go:141] libmachine: () Calling .GetVersion
	I0719 08:10:14.396120    4980 main.go:141] libmachine: Using API Version  1
	I0719 08:10:14.396138    4980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 08:10:14.396371    4980 main.go:141] libmachine: () Calling .GetMachineName
	I0719 08:10:14.396496    4980 main.go:141] libmachine: (multinode-791000-m02) Calling .GetState
	I0719 08:10:14.396594    4980 main.go:141] libmachine: (multinode-791000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 08:10:14.396658    4980 main.go:141] libmachine: (multinode-791000-m02) DBG | hyperkit pid from json: 4922
	I0719 08:10:14.397632    4980 main.go:141] libmachine: (multinode-791000-m02) DBG | hyperkit pid 4922 missing from process table
	I0719 08:10:14.397656    4980 status.go:330] multinode-791000-m02 host status = "Stopped" (err=<nil>)
	I0719 08:10:14.397661    4980 status.go:343] host is not running, skipping remaining checks
	I0719 08:10:14.397667    4980 status.go:257] multinode-791000-m02 status: &{Name:multinode-791000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.76s)

TestMultiNode/serial/RestartMultiNode (139.32s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-791000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0719 08:11:19.627308    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-791000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (2m18.990989962s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-791000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (139.32s)

TestMultiNode/serial/ValidateNameConflict (44.58s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-791000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-791000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-791000-m02 --driver=hyperkit : exit status 14 (439.240082ms)

-- stdout --
	* [multinode-791000-m02] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-791000-m02' is duplicated with machine name 'multinode-791000-m02' in profile 'multinode-791000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-791000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-791000-m03 --driver=hyperkit : (40.417644335s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-791000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-791000: exit status 80 (269.055303ms)

-- stdout --
	* Adding node m03 to cluster multinode-791000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-791000-m03 already exists in multinode-791000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-791000-m03
E0719 08:13:16.568777    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-791000-m03: (3.40235485s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.58s)

TestPreload (162.2s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-387000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0719 08:14:29.200505    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-387000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m13.829489175s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-387000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-387000 image pull gcr.io/k8s-minikube/busybox: (1.566520706s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-387000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-387000: (8.389234575s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-387000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-387000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m13.005967677s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-387000 image list
helpers_test.go:175: Cleaning up "test-preload-387000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-387000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-387000: (5.258191441s)
--- PASS: TestPreload (162.20s)

TestScheduledStopUnix (223.86s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-998000 --memory=2048 --driver=hyperkit 
E0719 08:18:16.615367    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-998000 --memory=2048 --driver=hyperkit : (2m32.3137781s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-998000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-998000 -n scheduled-stop-998000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-998000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-998000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-998000 -n scheduled-stop-998000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-998000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-998000 --schedule 15s
E0719 08:19:12.306809    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0719 08:19:29.246783    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-998000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-998000: exit status 7 (71.197238ms)

-- stdout --
	scheduled-stop-998000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-998000 -n scheduled-stop-998000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-998000 -n scheduled-stop-998000: exit status 7 (66.970497ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-998000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-998000
--- PASS: TestScheduledStopUnix (223.86s)

TestSkaffold (114.93s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe498595863 version
skaffold_test.go:59: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe498595863 version: (1.72975319s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-328000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-328000 --memory=2600 --driver=hyperkit : (37.596833492s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe498595863 run --minikube-profile skaffold-328000 --kube-context skaffold-328000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe498595863 run --minikube-profile skaffold-328000 --kube-context skaffold-328000 --status-check=true --port-forward=false --interactive=false: (57.281687725s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-549977f9d6-6z5wd" [db736d18-a772-4886-8c6f-b726b96ea3be] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004056781s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-698969c64f-nfw8h" [8a1b219f-97ce-47c8-8aa9-96c03716c5de] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005194851s
helpers_test.go:175: Cleaning up "skaffold-328000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-328000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-328000: (5.23632124s)
--- PASS: TestSkaffold (114.93s)

TestRunningBinaryUpgrade (96.73s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3522620339 start -p running-upgrade-569000 --memory=2200 --vm-driver=hyperkit 
E0719 08:28:16.599133    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3522620339 start -p running-upgrade-569000 --memory=2200 --vm-driver=hyperkit : (54.051789346s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-569000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0719 08:29:17.271133    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:29:29.229337    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-569000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (36.073198486s)
helpers_test.go:175: Cleaning up "running-upgrade-569000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-569000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-569000: (5.345663463s)
--- PASS: TestRunningBinaryUpgrade (96.73s)

TestKubernetesUpgrade (234.07s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-626000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-626000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (52.861399794s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-626000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-626000: (2.410690364s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-626000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-626000 status --format={{.Host}}: exit status 7 (66.781237ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-626000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-626000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit : (2m28.369153177s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-626000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-626000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-626000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (545.256392ms)

-- stdout --
	* [kubernetes-upgrade-626000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-626000
	    minikube start -p kubernetes-upgrade-626000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6260002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-626000 --kubernetes-version=v1.31.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-626000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-626000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit : (24.523942814s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-626000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-626000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-626000: (5.24203948s)
--- PASS: TestKubernetesUpgrade (234.07s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.52s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19302
- KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current501276886/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current501276886/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current501276886/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current501276886/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.52s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.24s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19302
- KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2205190050/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2205190050/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2205190050/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2205190050/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.24s)

TestStoppedBinaryUpgrade/Setup (1.06s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.06s)

TestStoppedBinaryUpgrade/Upgrade (103.59s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.4227980912 start -p stopped-upgrade-958000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.4227980912 start -p stopped-upgrade-958000 --memory=2200 --vm-driver=hyperkit : (58.838042297s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.4227980912 -p stopped-upgrade-958000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.4227980912 -p stopped-upgrade-958000 stop: (8.242307237s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-958000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0719 08:31:33.416793    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-958000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (36.512979817s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (103.59s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.64s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-958000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-958000: (2.635202791s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.64s)

TestPause/serial/Start (92.96s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-571000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
E0719 08:32:01.108438    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-571000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (1m32.962289297s)
--- PASS: TestPause/serial/Start (92.96s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (36.63s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-571000 --alsologtostderr -v=1 --driver=hyperkit 
E0719 08:33:16.592058    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-571000 --alsologtostderr -v=1 --driver=hyperkit : (36.615505499s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.63s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.79s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-273000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-273000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (791.827304ms)
-- stdout --
	* [NoKubernetes-273000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1032/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1032/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.79s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (41.35s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-273000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-273000 --driver=hyperkit : (41.179466198s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-273000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.35s)

                                                
                                    
TestPause/serial/Pause (0.53s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-571000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.53s)

                                                
                                    
TestPause/serial/VerifyStatus (0.16s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-571000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-571000 --output=json --layout=cluster: exit status 2 (160.813287ms)
-- stdout --
	{"Name":"pause-571000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-571000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.16s)

                                                
                                    
TestPause/serial/Unpause (0.51s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-571000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.51s)

                                                
                                    
TestPause/serial/PauseAgain (0.57s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-571000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.57s)

                                                
                                    
TestPause/serial/DeletePaused (5.25s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-571000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-571000 --alsologtostderr -v=5: (5.245823948s)
--- PASS: TestPause/serial/DeletePaused (5.25s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.19s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (205.73s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (3m25.73206277s)
--- PASS: TestNetworkPlugins/group/auto/Start (205.73s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-248000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.16s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-248000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-dptww" [449ff886-a1ea-4138-a712-98f064890571] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-dptww" [449ff886-a1ea-4138-a712-98f064890571] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.002416971s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.16s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-248000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (73.69s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
E0719 08:38:16.673375    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m13.68844994s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.69s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dbc28" [4257a083-914f-450d-bbab-02cec21543ff] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004968649s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-248000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.13s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-248000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-f5wrr" [99a02341-6f75-46e6-bce7-3b052211608d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-f5wrr" [99a02341-6f75-46e6-bce7-3b052211608d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004122795s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-248000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (84.12s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m24.11725913s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (63.9s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (1m3.903219942s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.90s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-54rfh" [6cd6e9fe-1cee-4410-8024-de6682072c12] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004331853s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.16s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-248000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.14s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-248000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9k7g7" [2149f03b-ff47-4ca8-8020-e665b5f63816] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9k7g7" [2149f03b-ff47-4ca8-8020-e665b5f63816] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003893299s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.14s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-248000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-248000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-248000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5rkkf" [55de7e4e-7e52-4206-ba25-1781a35eabf8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5rkkf" [55de7e4e-7e52-4206-ba25-1781a35eabf8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003486894s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.15s)

                                                
                                    
TestNetworkPlugins/group/false/Start (57.08s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (57.079621895s)
--- PASS: TestNetworkPlugins/group/false/Start (57.08s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-248000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (52.82s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
E0719 08:42:25.077791    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 08:42:25.083216    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 08:42:25.093459    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 08:42:25.114278    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 08:42:25.154529    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 08:42:25.234686    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 08:42:25.395802    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 08:42:25.717689    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 08:42:26.358936    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 08:42:27.639452    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 08:42:30.200130    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 08:42:35.320748    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (52.821704403s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (52.82s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.16s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-248000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.18s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-248000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hx8vc" [ca05526b-029c-4e4d-8c75-6d7bb7c843c8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hx8vc" [ca05526b-029c-4e4d-8c75-6d7bb7c843c8] Running
E0719 08:42:45.560785    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004605637s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.18s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-248000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-248000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-248000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-t8n24" [60b8954d-50a8-44b3-8526-76f7705975e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-t8n24" [60b8954d-50a8-44b3-8526-76f7705975e4] Running
E0719 08:43:06.042510    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00422952s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (61.76s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (1m1.755699628s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.76s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-248000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (91.75s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
E0719 08:43:47.003365    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 08:44:08.132283    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:44:08.137707    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:44:08.148287    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:44:08.170523    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:44:08.211070    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:44:08.292034    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:44:08.452161    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:44:08.772714    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (1m31.74936906s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.75s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-j2h6j" [29fa5e9f-e48f-4002-bb35-6868504620d1] Running
E0719 08:44:09.413950    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:44:10.694908    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:44:13.257179    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005275801s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-248000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.14s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-248000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rj65z" [745ef296-e525-4de1-a1b0-6ed324d75156] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0719 08:44:18.378242    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-rj65z" [745ef296-e525-4de1-a1b0-6ed324d75156] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004072359s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.14s)
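Each NetCatPod step above follows the same wait-for-healthy pattern: the harness polls pod status against a deadline ('waiting 15m0s for pods matching "app=netcat"') until the pod reports Running. A minimal sketch of that poll-with-deadline loop, in Python for illustration only (the real harness is Go, and `wait_for` / `ready` are invented names, not minikube code):

```python
import time

def wait_for(predicate, timeout=900.0, interval=2.0):
    """Poll `predicate` until it returns True or `timeout` seconds elapse.

    900s mirrors the 15m0s wait used by the NetCatPod tests above;
    returns True on success, False if the deadline passes first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Toy usage: a stand-in check that only becomes "healthy" on the third poll.
state = {"polls": 0}
def ready():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_for(ready, timeout=10.0, interval=0.01))  # True
```

The test log's "healthy within 11.004072359s" figures are just the elapsed time when such a loop's predicate first succeeded.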

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-248000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (90.24s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E0719 08:44:49.100863    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-248000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (1m30.239589971s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (90.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-248000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.14s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-248000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jj7zj" [5ff2e7fc-04a2-46d8-937d-8b1ba080467c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jj7zj" [5ff2e7fc-04a2-46d8-937d-8b1ba080467c] Running
E0719 08:45:08.922365    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.002982275s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-248000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
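The Localhost and HairPin checks above both reduce to `nc -w 5 -i 5 -z <host> 8080`: open a TCP connection with a 5-second timeout, send nothing (`-z`), and report whether the connect succeeded. A hedged Python equivalent of that zero-I/O probe (the `port_open` helper is ours for illustration, not part of minikube or the test suite):

```python
import socket

def port_open(host, port, timeout=5.0):
    """Rough analogue of `nc -w 5 -z host port`: connect, transfer no data,
    and report success or failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe a port we just opened locally, then probe it again after closing it.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
print(port_open("127.0.0.1", port))   # True while the listener exists
srv.close()
print(port_open("127.0.0.1", port))   # False once it is gone
```

The HairPin variant targets the service's own name (`nc ... netcat 8080`) from inside the pod backing it, which exercises hairpin NAT; the Localhost variant targets `localhost` and only exercises the loopback path.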

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (163.33s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-276000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0719 08:45:30.060709    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:46:07.104594    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:46:07.111045    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:46:07.122468    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:46:07.143458    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:46:07.185121    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:46:07.267371    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:46:07.429198    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:46:07.750845    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:46:08.391313    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:46:09.672571    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:46:12.233111    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-276000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (2m43.33446696s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (163.33s)
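This run is littered with benign `cert_rotation.go:168` errors: the client cert reloader keeps re-opening `client.crt` files for profiles (auto-248000, kindnet-248000, calico-248000, ...) that earlier tests already deleted. When triaging a report like this, grouping the noise by profile makes the real failures easier to see; a small sketch (the regex and helper are ours, run against lines copied verbatim from this report):

```python
import re
from collections import Counter

# Three representative lines from the log above.
log = """\
E0719 08:44:08.132283    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:46:07.104594    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:46:07.111045    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
"""

# Capture the missing client.crt path, then tally by profile name
# (the directory component under .minikube/profiles/).
pattern = re.compile(r"cert_rotation\.go:\d+\] key failed with : open (\S+): no such file")
counts = Counter(m.group(1).split("/")[-2] for m in pattern.finditer(log))
print(counts)  # Counter({'calico-248000': 2, 'kindnet-248000': 1})
```

Over the whole report this kind of tally quickly confirms the errors track deleted profiles rather than the cluster under test.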

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-248000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.14s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-248000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9lzbn" [e0c2c803-7ea0-4a58-8102-bc13fa6fd9a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0719 08:46:17.354044    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-9lzbn" [e0c2c803-7ea0-4a58-8102-bc13fa6fd9a2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004685793s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-248000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-248000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)
E0719 09:03:16.709641    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (68.6s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-357000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0719 08:46:45.241219    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/custom-flannel-248000/client.crt: no such file or directory
E0719 08:46:48.075640    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:46:51.981095    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:46:55.481987    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/custom-flannel-248000/client.crt: no such file or directory
E0719 08:47:15.962810    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/custom-flannel-248000/client.crt: no such file or directory
E0719 08:47:25.073816    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 08:47:29.035213    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:47:38.930090    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 08:47:38.936336    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 08:47:38.947782    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 08:47:38.967862    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 08:47:39.007944    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 08:47:39.088819    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 08:47:39.250797    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 08:47:39.571182    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 08:47:40.211965    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 08:47:41.492188    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 08:47:44.052993    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 08:47:49.173697    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-357000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (1m8.604353484s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.60s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.21s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-357000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c259488e-4b3a-4fcb-a0d8-1cecc70be90e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0719 08:47:52.761188    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c259488e-4b3a-4fcb-a0d8-1cecc70be90e] Running
E0719 08:47:56.922796    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/custom-flannel-248000/client.crt: no such file or directory
E0719 08:47:58.251835    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
E0719 08:47:58.258319    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
E0719 08:47:58.269342    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
E0719 08:47:58.291510    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
E0719 08:47:58.331734    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
E0719 08:47:58.413560    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
E0719 08:47:58.574906    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
E0719 08:47:58.895661    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
E0719 08:47:59.414937    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 08:47:59.537052    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004890075s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-357000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.81s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-357000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0719 08:48:00.817355    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-357000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (8.44s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-357000 --alsologtostderr -v=3
E0719 08:48:03.377952    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
E0719 08:48:08.498597    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-357000 --alsologtostderr -v=3: (8.443892577s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-357000 -n no-preload-357000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-357000 -n no-preload-357000: exit status 7 (67.265597ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-357000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (293.93s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-357000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-357000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (4m53.772623486s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-357000 -n no-preload-357000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (293.93s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-276000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2ab4b898-0f7a-4ac2-9d2e-50a3e8505a9b] Pending
helpers_test.go:344: "busybox" [2ab4b898-0f7a-4ac2-9d2e-50a3e8505a9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2ab4b898-0f7a-4ac2-9d2e-50a3e8505a9b] Running
E0719 08:48:16.664828    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 08:48:18.739977    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
E0719 08:48:19.896169    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003847812s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-276000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.34s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-276000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-276000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/old-k8s-version/serial/Stop (8.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-276000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-276000 --alsologtostderr -v=3: (8.414694053s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.41s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-276000 -n old-k8s-version-276000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-276000 -n old-k8s-version-276000: exit status 7 (68.628854ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-276000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/old-k8s-version/serial/SecondStart (383.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-276000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0719 08:48:39.220875    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
E0719 08:48:50.954264    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:49:00.856637    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 08:49:08.128495    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:49:09.190132    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:49:09.195359    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:49:09.206778    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:49:09.228218    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:49:09.270322    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:49:09.350685    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:49:09.510879    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:49:09.831023    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:49:10.472211    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:49:11.752339    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:49:14.313235    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:49:18.843564    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/custom-flannel-248000/client.crt: no such file or directory
E0719 08:49:19.434306    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:49:20.181564    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
E0719 08:49:29.295783    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 08:49:29.674320    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:49:35.819733    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:49:50.154902    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:49:59.809849    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:49:59.815395    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:49:59.825795    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:49:59.846927    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:49:59.887096    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:49:59.969237    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:50:00.130700    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:50:00.451744    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:50:01.092972    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:50:02.373255    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:50:04.934741    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:50:10.055797    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:50:20.295779    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:50:22.776923    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 08:50:31.114442    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:50:40.775730    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:50:42.100627    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
E0719 08:51:07.101181    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:51:15.240771    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:51:15.245909    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:51:15.257123    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:51:15.277754    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:51:15.318077    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:51:15.398791    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:51:15.559031    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:51:15.879454    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:51:16.521733    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:51:17.802619    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:51:20.364370    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:51:21.735323    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:51:25.484808    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:51:33.483051    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:51:34.792073    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:51:34.991042    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/custom-flannel-248000/client.crt: no such file or directory
E0719 08:51:35.724826    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:51:53.034065    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:51:56.206814    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:52:02.682886    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/custom-flannel-248000/client.crt: no such file or directory
E0719 08:52:25.124329    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 08:52:32.411999    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 08:52:37.221937    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:52:38.979557    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 08:52:43.709820    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:52:58.303099    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-276000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (6m23.703265383s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-276000 -n old-k8s-version-276000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (383.87s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-lzk64" [293fa227-3d16-4f61-90e5-de66476f058d] Running
E0719 08:53:06.669920    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00450486s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-lzk64" [293fa227-3d16-4f61-90e5-de66476f058d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004161263s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-357000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-357000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/no-preload/serial/Pause (1.95s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-357000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-357000 -n no-preload-357000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-357000 -n no-preload-357000: exit status 2 (162.641718ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-357000 -n no-preload-357000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-357000 -n no-preload-357000: exit status 2 (164.465852ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-357000 --alsologtostderr -v=1
E0719 08:53:16.714574    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-357000 -n no-preload-357000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-357000 -n no-preload-357000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.95s)

TestStartStop/group/embed-certs/serial/FirstStart (90.57s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-677000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3
E0719 08:53:25.995201    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
E0719 08:53:59.141386    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:54:08.177946    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:54:09.240466    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:54:29.346690    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 08:54:36.927031    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-677000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3: (1m30.564984499s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (90.57s)

TestStartStop/group/embed-certs/serial/DeployApp (9.21s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-677000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bf333fc4-f82b-45ca-9cbb-3aab44988bd1] Pending
helpers_test.go:344: "busybox" [bf333fc4-f82b-45ca-9cbb-3aab44988bd1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bf333fc4-f82b-45ca-9cbb-3aab44988bd1] Running
E0719 08:54:59.860973    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.002655828s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-677000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.21s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lbmqz" [33da1a7d-0875-4e26-9d3e-99a0e200435d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004382045s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lbmqz" [33da1a7d-0875-4e26-9d3e-99a0e200435d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003600418s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-276000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-677000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-677000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.80s)

                                                

                                                
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-677000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-677000 --alsologtostderr -v=3: (8.445529194s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.45s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-276000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/old-k8s-version/serial/Pause (1.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-276000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-276000 -n old-k8s-version-276000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-276000 -n old-k8s-version-276000: exit status 2 (164.439679ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-276000 -n old-k8s-version-276000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-276000 -n old-k8s-version-276000: exit status 2 (162.429067ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-276000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-276000 -n old-k8s-version-276000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-276000 -n old-k8s-version-276000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.89s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-677000 -n embed-certs-677000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-677000 -n embed-certs-677000: exit status 7 (67.256484ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-677000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/embed-certs/serial/SecondStart (311.23s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-677000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-677000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3: (5m11.072982616s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-677000 -n embed-certs-677000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (311.23s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (128.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-078000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3
E0719 08:55:27.550101    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
E0719 08:56:07.151717    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 08:56:15.292105    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 08:56:33.535011    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:56:35.043722    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/custom-flannel-248000/client.crt: no such file or directory
E0719 08:56:42.980296    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-078000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3: (2m8.976177992s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (128.98s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-078000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [94c2f001-aa9b-4161-9d7e-fd1b48e6a076] Pending
helpers_test.go:344: "busybox" [94c2f001-aa9b-4161-9d7e-fd1b48e6a076] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0719 08:57:25.121333    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [94c2f001-aa9b-4161-9d7e-fd1b48e6a076] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005335848s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-078000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.20s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-078000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-078000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (8.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-078000 --alsologtostderr -v=3
E0719 08:57:38.977205    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-078000 --alsologtostderr -v=3: (8.447706448s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.45s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-078000 -n default-k8s-diff-port-078000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-078000 -n default-k8s-diff-port-078000: exit status 7 (65.86144ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-078000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (310.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-078000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3
E0719 08:57:52.530835    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
E0719 08:57:52.536224    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
E0719 08:57:52.547012    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
E0719 08:57:52.567517    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
E0719 08:57:52.607687    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
E0719 08:57:52.689868    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
E0719 08:57:52.851179    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
E0719 08:57:53.171740    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
E0719 08:57:53.812966    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
E0719 08:57:55.093146    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
E0719 08:57:57.653327    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
E0719 08:57:58.299930    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
E0719 08:58:02.773708    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
E0719 08:58:12.973619    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 08:58:12.980037    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 08:58:12.992018    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 08:58:13.013994    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
E0719 08:58:13.014001    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 08:58:13.055973    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 08:58:13.136812    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 08:58:13.297254    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 08:58:13.617804    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 08:58:14.258038    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 08:58:15.538761    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 08:58:16.712444    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 08:58:18.099203    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 08:58:23.219680    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 08:58:33.459760    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 08:58:33.494561    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
E0719 08:58:48.169759    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 08:58:53.940721    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 08:59:08.175218    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 08:59:09.238532    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
E0719 08:59:14.455676    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
E0719 08:59:29.343261    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/addons-870000/client.crt: no such file or directory
E0719 08:59:34.901494    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 08:59:36.587966    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 08:59:59.859205    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/bridge-248000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-078000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3: (5m10.478930861s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-078000 -n default-k8s-diff-port-078000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (310.67s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-t9qzq" [bd51ce9c-f98b-45d1-936c-c45c47537fa9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002183896s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-t9qzq" [bd51ce9c-f98b-45d1-936c-c45c47537fa9] Running
E0719 09:00:31.228745    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004290029s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-677000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-677000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/embed-certs/serial/Pause (1.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-677000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-677000 -n embed-certs-677000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-677000 -n embed-certs-677000: exit status 2 (155.75688ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-677000 -n embed-certs-677000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-677000 -n embed-certs-677000: exit status 2 (154.327074ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-677000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-677000 -n embed-certs-677000
E0719 09:00:36.375066    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-677000 -n embed-certs-677000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.96s)

TestStartStop/group/newest-cni/serial/FirstStart (157.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-923000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0719 09:00:56.821208    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 09:01:07.148591    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 09:01:15.289560    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kubenet-248000/client.crt: no such file or directory
E0719 09:01:19.768493    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/functional-638000/client.crt: no such file or directory
E0719 09:01:33.533179    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/skaffold-328000/client.crt: no such file or directory
E0719 09:01:35.039628    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/custom-flannel-248000/client.crt: no such file or directory
E0719 09:02:25.118745    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/auto-248000/client.crt: no such file or directory
E0719 09:02:30.200507    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/calico-248000/client.crt: no such file or directory
E0719 09:02:38.974595    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 09:02:52.527192    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-923000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (2m37.923205893s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (157.92s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-8tt6b" [3e3fbd55-6864-462c-bb60-ccd93c39156d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-8tt6b" [3e3fbd55-6864-462c-bb60-ccd93c39156d] Running
E0719 09:02:58.092566    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/custom-flannel-248000/client.crt: no such file or directory
E0719 09:02:58.296013    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004835651s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-8tt6b" [3e3fbd55-6864-462c-bb60-ccd93c39156d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005387056s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-078000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-078000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-078000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-078000 -n default-k8s-diff-port-078000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-078000 -n default-k8s-diff-port-078000: exit status 2 (217.735633ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-078000 -n default-k8s-diff-port-078000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-078000 -n default-k8s-diff-port-078000: exit status 2 (189.905545ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-078000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-078000 -n default-k8s-diff-port-078000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-078000 -n default-k8s-diff-port-078000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.25s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-923000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0719 09:03:20.214137    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/no-preload-357000/client.crt: no such file or directory
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/newest-cni/serial/Stop (8.43s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-923000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-923000 --alsologtostderr -v=3: (8.427928095s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.43s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-923000 -n newest-cni-923000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-923000 -n newest-cni-923000: exit status 7 (67.632888ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-923000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/newest-cni/serial/SecondStart (51.18s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-923000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0719 09:03:40.660986    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/old-k8s-version-276000/client.crt: no such file or directory
E0719 09:04:02.023984    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/false-248000/client.crt: no such file or directory
E0719 09:04:08.173704    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/kindnet-248000/client.crt: no such file or directory
E0719 09:04:09.236415    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/flannel-248000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-923000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (51.015515672s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-923000 -n newest-cni-923000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (51.18s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-923000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/newest-cni/serial/Pause (1.82s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-923000 --alsologtostderr -v=1
E0719 09:04:21.351183    1560 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1032/.minikube/profiles/enable-default-cni-248000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-923000 -n newest-cni-923000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-923000 -n newest-cni-923000: exit status 2 (158.791241ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-923000 -n newest-cni-923000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-923000 -n newest-cni-923000: exit status 2 (161.18302ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-923000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-923000 -n newest-cni-923000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-923000 -n newest-cni-923000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.82s)

Test skip (22/339)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (5.69s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-248000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-248000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-248000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-248000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-248000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-248000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-248000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-248000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-248000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-248000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-248000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-248000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-248000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-248000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-248000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-248000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-248000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-248000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-248000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-248000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-248000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-248000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-248000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-248000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-248000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-248000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-248000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-248000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-248000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-248000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-248000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: kubelet daemon config:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> k8s: kubelet logs:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-248000

>>> host: docker daemon status:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: docker daemon config:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: docker system info:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: cri-docker daemon status:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: cri-docker daemon config:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: cri-dockerd version:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: containerd daemon status:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: containerd daemon config:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: containerd config dump:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: crio daemon status:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: crio daemon config:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: /etc/crio:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

>>> host: crio config:
* Profile "cilium-248000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-248000"

----------------------- debugLogs end: cilium-248000 [took: 5.481711506s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-248000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-248000
--- SKIP: TestNetworkPlugins/group/cilium (5.69s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-831000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-831000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
