Test Report: Hyperkit_macOS 19307

5a24b9ce483ba531c92412d298617e78cc9898c8:2024-07-19:35418

Tests failed (2/344)

Order  Failed test                                  Duration (s)
232    TestMountStart/serial/StartWithMountSecond   76.00
244    TestMultiNode/serial/RestartKeepsNodes       204.09
TestMountStart/serial/StartWithMountSecond (76s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-110000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-2-110000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 90 (1m15.846610715s)

-- stdout --
	* [mount-start-2-110000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1053/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-2-110000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 18:53:22 mount-start-2-110000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 18:53:22 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:22.126713422Z" level=info msg="Starting up"
	Jul 19 18:53:22 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:22.127156930Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 18:53:22 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:22.127856258Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=516
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.145571790Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.162000493Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.162064454Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.162125447Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.162161273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.162269899Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.162311850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.162457901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.162500014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.162533530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.162563638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.162644262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.162820555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.164393982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.164490131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.164631119Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.164674100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.164758878Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.164824648Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.167112424Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.167199534Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.167285692Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.167326442Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.167359338Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.167453254Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.167681431Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.167786812Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.167826251Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.167857177Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.167891336Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.167926939Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.167956721Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168002520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168041499Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168075335Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168105520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168134053Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168168764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168207345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168247200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168282582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168314489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168344157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168373653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168405367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168435369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168520029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168552356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168581934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168611484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168642604Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168677800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168709785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168739627Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168830159Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168877190Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168907881Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168936501Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168964618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.168993294Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.169024375Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.169198370Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.169320122Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.169382956Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 18:53:22 mount-start-2-110000 dockerd[516]: time="2024-07-19T18:53:22.169419379Z" level=info msg="containerd successfully booted in 0.024536s"
	Jul 19 18:53:23 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:23.169882380Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 18:53:23 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:23.178902480Z" level=info msg="Loading containers: start."
	Jul 19 18:53:23 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:23.263990340Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 18:53:23 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:23.343213987Z" level=info msg="Loading containers: done."
	Jul 19 18:53:23 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:23.353344423Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 18:53:23 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:23.353500887Z" level=info msg="Daemon has completed initialization"
	Jul 19 18:53:23 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:23.378621774Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 18:53:23 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:23.378740276Z" level=info msg="API listen on [::]:2376"
	Jul 19 18:53:23 mount-start-2-110000 systemd[1]: Started Docker Application Container Engine.
	Jul 19 18:53:24 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:24.328659195Z" level=info msg="Processing signal 'terminated'"
	Jul 19 18:53:24 mount-start-2-110000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 18:53:24 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:24.329917538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 18:53:24 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:24.330030298Z" level=info msg="Daemon shutdown complete"
	Jul 19 18:53:24 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:24.330093571Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 18:53:24 mount-start-2-110000 dockerd[509]: time="2024-07-19T18:53:24.330105127Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 18:53:25 mount-start-2-110000 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 18:53:25 mount-start-2-110000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 18:53:25 mount-start-2-110000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 18:53:25 mount-start-2-110000 dockerd[911]: time="2024-07-19T18:53:25.365677113Z" level=info msg="Starting up"
	Jul 19 18:54:25 mount-start-2-110000 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 18:54:25 mount-start-2-110000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 18:54:25 mount-start-2-110000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 18:54:25 mount-start-2-110000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-2-110000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-110000 -n mount-start-2-110000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-110000 -n mount-start-2-110000: exit status 6 (152.374182ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0719 11:54:25.479702    4175 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-110000" does not appear in /Users/jenkins/minikube-integration/19307-1053/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-110000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/StartWithMountSecond (76.00s)
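The root error in this failure is visible in the captured journal above: dockerd is restarted by minikube, and the second instance times out dialing the containerd socket ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"). A minimal triage sketch, assuming the `sudo journalctl --no-pager -u docker` output shown above has been saved to a local file (the filename here is illustrative, not part of the test harness):

```shell
# Stand-in excerpt of the captured docker journal (two lines copied from the report above).
cat > docker-journal.log <<'EOF'
Jul 19 18:53:25 mount-start-2-110000 dockerd[911]: time="2024-07-19T18:53:25.365677113Z" level=info msg="Starting up"
Jul 19 18:54:25 mount-start-2-110000 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": context deadline exceeded
EOF

# Surface the fatal line with its position; the containerd socket dial
# timeout is the actual failure behind "exit status 90" in this test.
grep -n 'failed to start daemon' docker-journal.log
```

On a live VM the same line would come from `systemctl status docker.service` or `journalctl -xeu docker.service`, as the error message itself suggests.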

TestMultiNode/serial/RestartKeepsNodes (204.09s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-871000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-871000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-871000: (18.831981722s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-871000 --wait=true -v=8 --alsologtostderr
E0719 12:02:04.442305    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 12:02:12.162460    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-871000 --wait=true -v=8 --alsologtostderr: exit status 90 (3m1.297348949s)

-- stdout --
	* [multinode-871000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1053/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "multinode-871000" primary control-plane node in "multinode-871000" cluster
	* Restarting existing hyperkit VM for "multinode-871000" ...
	* Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-871000-m02" worker node in "multinode-871000" cluster
	* Restarting existing hyperkit VM for "multinode-871000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.16
	* Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	  - env NO_PROXY=192.169.0.16
	* Verifying Kubernetes components...
	
	* Starting "multinode-871000-m03" worker node in "multinode-871000" cluster
	* Restarting existing hyperkit VM for "multinode-871000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.16,192.169.0.18
	
	

-- /stdout --
** stderr ** 
	I0719 12:00:32.402048    4831 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:00:32.402301    4831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:00:32.402306    4831 out.go:304] Setting ErrFile to fd 2...
	I0719 12:00:32.402310    4831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:00:32.402455    4831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
	I0719 12:00:32.403925    4831 out.go:298] Setting JSON to false
	I0719 12:00:32.426276    4831 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3602,"bootTime":1721412030,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0719 12:00:32.426364    4831 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:00:32.447891    4831 out.go:177] * [multinode-871000] minikube v1.33.1 on Darwin 14.5
	I0719 12:00:32.489466    4831 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:00:32.489530    4831 notify.go:220] Checking for updates...
	I0719 12:00:32.533563    4831 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 12:00:32.554596    4831 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0719 12:00:32.575798    4831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:00:32.596829    4831 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1053/.minikube
	I0719 12:00:32.618587    4831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:00:32.640567    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:00:32.640788    4831 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:00:32.641433    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:00:32.641515    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:00:32.651128    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53156
	I0719 12:00:32.651644    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:00:32.652227    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:00:32.652240    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:00:32.652558    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:00:32.652848    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:32.681431    4831 out.go:177] * Using the hyperkit driver based on existing profile
	I0719 12:00:32.723797    4831 start.go:297] selected driver: hyperkit
	I0719 12:00:32.723820    4831 start.go:901] validating driver "hyperkit" against &{Name:multinode-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.19 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:00:32.724058    4831 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:00:32.724240    4831 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:00:32.724440    4831 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19307-1053/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0719 12:00:32.734251    4831 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0719 12:00:32.738039    4831 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:00:32.738061    4831 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0719 12:00:32.741095    4831 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:00:32.741160    4831 cni.go:84] Creating CNI manager for ""
	I0719 12:00:32.741169    4831 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 12:00:32.741249    4831 start.go:340] cluster config:
	{Name:multinode-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.19 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:00:32.741357    4831 iso.go:125] acquiring lock: {Name:mkefd37d87f1d623b7fad18d7afa6e68e29a5c1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:00:32.783504    4831 out.go:177] * Starting "multinode-871000" primary control-plane node in "multinode-871000" cluster
	I0719 12:00:32.804769    4831 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:00:32.804839    4831 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1053/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 12:00:32.804870    4831 cache.go:56] Caching tarball of preloaded images
	I0719 12:00:32.805070    4831 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1053/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 12:00:32.805092    4831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:00:32.805281    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:00:32.806338    4831 start.go:360] acquireMachinesLock for multinode-871000: {Name:mk9f33e92e6d472bd2fb7a1dc1c9d72253ce59c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:00:32.806514    4831 start.go:364] duration metric: took 150.522µs to acquireMachinesLock for "multinode-871000"
	I0719 12:00:32.806547    4831 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:00:32.806566    4831 fix.go:54] fixHost starting: 
	I0719 12:00:32.806933    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:00:32.806964    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:00:32.815725    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53158
	I0719 12:00:32.816080    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:00:32.816412    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:00:32.816423    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:00:32.816631    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:00:32.816759    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:32.816865    4831 main.go:141] libmachine: (multinode-871000) Calling .GetState
	I0719 12:00:32.816949    4831 main.go:141] libmachine: (multinode-871000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:00:32.817026    4831 main.go:141] libmachine: (multinode-871000) DBG | hyperkit pid from json: 4202
	I0719 12:00:32.817953    4831 main.go:141] libmachine: (multinode-871000) DBG | hyperkit pid 4202 missing from process table
	I0719 12:00:32.817998    4831 fix.go:112] recreateIfNeeded on multinode-871000: state=Stopped err=<nil>
	I0719 12:00:32.818017    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	W0719 12:00:32.818110    4831 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:00:32.860589    4831 out.go:177] * Restarting existing hyperkit VM for "multinode-871000" ...
	I0719 12:00:32.883761    4831 main.go:141] libmachine: (multinode-871000) Calling .Start
	I0719 12:00:32.884214    4831 main.go:141] libmachine: (multinode-871000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/hyperkit.pid
	I0719 12:00:32.884261    4831 main.go:141] libmachine: (multinode-871000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:00:32.885990    4831 main.go:141] libmachine: (multinode-871000) DBG | hyperkit pid 4202 missing from process table
	I0719 12:00:32.886013    4831 main.go:141] libmachine: (multinode-871000) DBG | pid 4202 is in state "Stopped"
	I0719 12:00:32.886031    4831 main.go:141] libmachine: (multinode-871000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/hyperkit.pid...
	I0719 12:00:32.886224    4831 main.go:141] libmachine: (multinode-871000) DBG | Using UUID 50732e8d-1439-4d54-9eb1-76002314766d
	I0719 12:00:32.993265    4831 main.go:141] libmachine: (multinode-871000) DBG | Generated MAC f2:4c:c6:88:73:ec
	I0719 12:00:32.993291    4831 main.go:141] libmachine: (multinode-871000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000
	I0719 12:00:32.993436    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"50732e8d-1439-4d54-9eb1-76002314766d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000381500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0719 12:00:32.993474    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"50732e8d-1439-4d54-9eb1-76002314766d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000381500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0719 12:00:32.993514    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "50732e8d-1439-4d54-9eb1-76002314766d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/multinode-871000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/tty,log=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/bzimage,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000"}
	I0719 12:00:32.993552    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 50732e8d-1439-4d54-9eb1-76002314766d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/multinode-871000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/tty,log=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/console-ring -f kexec,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/bzimage,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000"
	I0719 12:00:32.993570    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0719 12:00:32.995054    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:32 DEBUG: hyperkit: Pid is 4843
	I0719 12:00:32.995478    4831 main.go:141] libmachine: (multinode-871000) DBG | Attempt 0
	I0719 12:00:32.995491    4831 main.go:141] libmachine: (multinode-871000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:00:32.995589    4831 main.go:141] libmachine: (multinode-871000) DBG | hyperkit pid from json: 4843
	I0719 12:00:32.997408    4831 main.go:141] libmachine: (multinode-871000) DBG | Searching for f2:4c:c6:88:73:ec in /var/db/dhcpd_leases ...
	I0719 12:00:32.997496    4831 main.go:141] libmachine: (multinode-871000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0719 12:00:32.997527    4831 main.go:141] libmachine: (multinode-871000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:5e:a3:f5:89:e4:9e ID:1,5e:a3:f5:89:e4:9e Lease:0x669ab7be}
	I0719 12:00:32.997541    4831 main.go:141] libmachine: (multinode-871000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:36:3f:5c:47:18:4c ID:1,36:3f:5c:47:18:4c Lease:0x669c0844}
	I0719 12:00:32.997551    4831 main.go:141] libmachine: (multinode-871000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:41:5c:70:34:46 ID:1,82:41:5c:70:34:46 Lease:0x669c0833}
	I0719 12:00:32.997564    4831 main.go:141] libmachine: (multinode-871000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:4c:c6:88:73:ec ID:1,f2:4c:c6:88:73:ec Lease:0x669c07f3}
	I0719 12:00:32.997575    4831 main.go:141] libmachine: (multinode-871000) DBG | Found match: f2:4c:c6:88:73:ec
	I0719 12:00:32.997597    4831 main.go:141] libmachine: (multinode-871000) DBG | IP: 192.169.0.16
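	(Annotation: the lookup above scans macOS's /var/db/dhcpd_leases for the MAC the driver generated, then reuses the matching lease's IP. A minimal sketch of that matching against a synthetic leases file — the file contents below are illustrative, not copied from this run:)

```shell
# Sketch: resolve the ip_address for a given hw_address in a
# dhcpd_leases-style file. Entries here are synthetic test data.
LEASES=$(mktemp)
cat > "$LEASES" <<'EOF'
{
	name=minikube
	ip_address=192.169.0.16
	hw_address=1,f2:4c:c6:88:73:ec
}
{
	name=minikube
	ip_address=192.169.0.18
	hw_address=1,36:3f:5c:47:18:4c
}
EOF

MAC=f2:4c:c6:88:73:ec
# Walk the records: remember the last ip_address seen, print it as soon as
# an hw_address line contains the MAC we are searching for.
awk -v mac="$MAC" '
  /ip_address=/ { sub(/.*ip_address=/, ""); ip = $0 }
  /hw_address=/ && index($0, mac) { print ip; exit }
' "$LEASES"
# -> 192.169.0.16
```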
	I0719 12:00:32.997640    4831 main.go:141] libmachine: (multinode-871000) Calling .GetConfigRaw
	I0719 12:00:32.998375    4831 main.go:141] libmachine: (multinode-871000) Calling .GetIP
	I0719 12:00:32.998583    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:00:32.999156    4831 machine.go:94] provisionDockerMachine start ...
	I0719 12:00:32.999170    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:32.999304    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:32.999433    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:32.999560    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:32.999695    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:32.999811    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:32.999943    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:00:33.000178    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0719 12:00:33.000187    4831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 12:00:33.003121    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0719 12:00:33.056538    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0719 12:00:33.057630    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0719 12:00:33.057646    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0719 12:00:33.057655    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0719 12:00:33.057661    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0719 12:00:33.434726    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0719 12:00:33.434742    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0719 12:00:33.549270    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0719 12:00:33.549284    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0719 12:00:33.549315    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0719 12:00:33.549350    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0719 12:00:33.550217    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0719 12:00:33.550231    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0719 12:00:38.801284    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:38 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0719 12:00:38.801336    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:38 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0719 12:00:38.801347    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:38 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0719 12:00:38.825961    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:38 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0719 12:00:44.069201    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 12:00:44.069216    4831 main.go:141] libmachine: (multinode-871000) Calling .GetMachineName
	I0719 12:00:44.069367    4831 buildroot.go:166] provisioning hostname "multinode-871000"
	I0719 12:00:44.069379    4831 main.go:141] libmachine: (multinode-871000) Calling .GetMachineName
	I0719 12:00:44.069499    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:44.069604    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:44.069698    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.069853    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.069950    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:44.070077    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:00:44.070222    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0719 12:00:44.070231    4831 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-871000 && echo "multinode-871000" | sudo tee /etc/hostname
	I0719 12:00:44.141472    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-871000
	
	I0719 12:00:44.141490    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:44.141615    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:44.141731    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.141817    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.141903    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:44.142025    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:00:44.142169    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0719 12:00:44.142180    4831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-871000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-871000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-871000' | sudo tee -a /etc/hosts; 
				fi
			fi
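	(Annotation: the SSH snippet above is minikube's provisioner patching the guest's /etc/hosts so the 127.0.1.1 entry carries the machine name. The same logic can be exercised in isolation against a scratch file — the file contents and paths below are illustrative, not from this run:)

```shell
# Sketch of the hostname-patching logic, run against a temporary copy
# instead of the real /etc/hosts. HOSTS and NAME are illustrative values.
HOSTS=$(mktemp)
NAME=multinode-871000
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME" "$HOSTS"; then
    if grep -q '^127.0.1.1[[:space:]]' "$HOSTS"; then
        # An existing 127.0.1.1 entry: rewrite it with the new hostname
        sed -i.bak "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # No 127.0.1.1 line yet: append one
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
grep '^127.0.1.1' "$HOSTS"
# -> 127.0.1.1 multinode-871000
```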
	I0719 12:00:44.211399    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 12:00:44.211422    4831 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19307-1053/.minikube CaCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19307-1053/.minikube}
	I0719 12:00:44.211437    4831 buildroot.go:174] setting up certificates
	I0719 12:00:44.211452    4831 provision.go:84] configureAuth start
	I0719 12:00:44.211466    4831 main.go:141] libmachine: (multinode-871000) Calling .GetMachineName
	I0719 12:00:44.211600    4831 main.go:141] libmachine: (multinode-871000) Calling .GetIP
	I0719 12:00:44.211700    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:44.211795    4831 provision.go:143] copyHostCerts
	I0719 12:00:44.211827    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem
	I0719 12:00:44.211901    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem, removing ...
	I0719 12:00:44.211908    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem
	I0719 12:00:44.212041    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem (1078 bytes)
	I0719 12:00:44.212239    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem
	I0719 12:00:44.212281    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem, removing ...
	I0719 12:00:44.212286    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem
	I0719 12:00:44.212365    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem (1123 bytes)
	I0719 12:00:44.212531    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem
	I0719 12:00:44.212571    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem, removing ...
	I0719 12:00:44.212576    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem
	I0719 12:00:44.212657    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem (1675 bytes)
	I0719 12:00:44.212798    4831 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem org=jenkins.multinode-871000 san=[127.0.0.1 192.169.0.16 localhost minikube multinode-871000]
	I0719 12:00:44.439259    4831 provision.go:177] copyRemoteCerts
	I0719 12:00:44.439310    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 12:00:44.439346    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:44.439552    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:44.439711    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.439856    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:44.439954    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/id_rsa Username:docker}
	I0719 12:00:44.479237    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 12:00:44.479307    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 12:00:44.499560    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 12:00:44.499626    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 12:00:44.520339    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 12:00:44.520397    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 12:00:44.539283    4831 provision.go:87] duration metric: took 327.817751ms to configureAuth
	I0719 12:00:44.539295    4831 buildroot.go:189] setting minikube options for container-runtime
	I0719 12:00:44.539471    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:00:44.539484    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:44.539631    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:44.539733    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:44.539816    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.539911    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.539992    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:44.540102    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:00:44.540227    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0719 12:00:44.540235    4831 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 12:00:44.604508    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 12:00:44.604520    4831 buildroot.go:70] root file system type: tmpfs
	I0719 12:00:44.604598    4831 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 12:00:44.604611    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:44.604749    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:44.604839    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.604930    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.605024    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:44.605164    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:00:44.605321    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0719 12:00:44.605367    4831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
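	(Annotation: the unit file written above depends on the empty `ExecStart=` directive clearing the command inherited from the base dockerd unit before the real one is set, exactly as its own comments describe; for a non-oneshot service, systemd rejects a unit that ends up with more than one effective ExecStart. A quick standalone check of that pattern — the file path and command are illustrative:)

```shell
# Sketch: verify that the "reset then set" ExecStart pattern leaves exactly
# one command-carrying ExecStart= line in a drop-in unit.
UNIT=$(mktemp)
cat > "$UNIT" <<'EOF'
[Service]
Type=notify
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF

# Count ExecStart= lines that actually carry a command (at least one
# character after the '='): must be exactly 1, or systemd would refuse to
# start with "more than one ExecStart= setting".
grep -c '^ExecStart=..*' "$UNIT"
# -> 1
```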
	I0719 12:00:44.678347    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
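(Editor's note: the comment block inside the echoed `docker.service` above describes systemd's override semantics — a later `ExecStart=` appends to an inherited one unless it is first cleared with an empty `ExecStart=`. A minimal illustration of that pattern, with a hypothetical drop-in path and flag, not taken from this log:)

```
# /etc/systemd/system/docker.service.d/override.conf  (hypothetical drop-in)
[Service]
# Clear the ExecStart= inherited from the base unit; without this line,
# systemd would see two ExecStart= settings and refuse to start the
# service ("more than one ExecStart= setting, which is only allowed
# for Type=oneshot services").
ExecStart=
ExecStart=/usr/bin/dockerd --some-flag
```

minikube sidesteps the drop-in mechanism entirely by writing the full unit to `/lib/systemd/system/docker.service`, but it keeps the defensive `ExecStart=` clear in case the file is ever layered on a base configuration.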
	I0719 12:00:44.678367    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:44.678528    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:44.678629    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.678719    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.678800    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:44.678932    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:00:44.679072    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0719 12:00:44.679085    4831 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 12:00:46.310178    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
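(Editor's note: the SSH command above is minikube's idempotent "install only if changed" idiom — `diff` the current unit against the freshly written `.new` file, and only on a difference move it into place and restart the service. A minimal self-contained sketch of the same shell pattern, using throwaway temp files instead of `/lib/systemd/system` and skipping the `systemctl` calls, which require a live systemd:)

```shell
# Sketch of the diff-or-replace idiom, assuming only POSIX sh + diff.
set -eu
old=$(mktemp)   # stands in for /lib/systemd/system/docker.service
new=$(mktemp)   # stands in for the freshly generated .new file
printf 'ExecStart=/usr/bin/dockerd\n' > "$new"

# If the files differ (or $old is missing/empty, as on first provision),
# diff exits non-zero and the replacement branch runs. In the real
# provisioner this branch also runs daemon-reload + enable + restart.
diff -u "$old" "$new" >/dev/null 2>&1 || mv "$new" "$old"

grep -q dockerd "$old" && echo "unit installed"
```

Note that the `diff: can't stat ... No such file or directory` line in the log output is therefore expected on a fresh VM: the failing `diff` is exactly what triggers installation of the new unit.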
	I0719 12:00:46.310192    4831 machine.go:97] duration metric: took 13.311069223s to provisionDockerMachine
	I0719 12:00:46.310205    4831 start.go:293] postStartSetup for "multinode-871000" (driver="hyperkit")
	I0719 12:00:46.310213    4831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 12:00:46.310226    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:46.310428    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 12:00:46.310443    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:46.310533    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:46.310628    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:46.310726    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:46.310830    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/id_rsa Username:docker}
	I0719 12:00:46.347950    4831 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 12:00:46.350937    4831 command_runner.go:130] > NAME=Buildroot
	I0719 12:00:46.350945    4831 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 12:00:46.350949    4831 command_runner.go:130] > ID=buildroot
	I0719 12:00:46.350953    4831 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 12:00:46.350957    4831 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 12:00:46.351059    4831 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 12:00:46.351070    4831 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1053/.minikube/addons for local assets ...
	I0719 12:00:46.351163    4831 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1053/.minikube/files for local assets ...
	I0719 12:00:46.351361    4831 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> 15922.pem in /etc/ssl/certs
	I0719 12:00:46.351367    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> /etc/ssl/certs/15922.pem
	I0719 12:00:46.351573    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 12:00:46.359513    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem --> /etc/ssl/certs/15922.pem (1708 bytes)
	I0719 12:00:46.378368    4831 start.go:296] duration metric: took 68.150448ms for postStartSetup
	I0719 12:00:46.378390    4831 fix.go:56] duration metric: took 13.571877481s for fixHost
	I0719 12:00:46.378414    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:46.378543    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:46.378630    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:46.378721    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:46.378806    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:46.378925    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:00:46.379066    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0719 12:00:46.379074    4831 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 12:00:46.440347    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721415646.621837239
	
	I0719 12:00:46.440359    4831 fix.go:216] guest clock: 1721415646.621837239
	I0719 12:00:46.440364    4831 fix.go:229] Guest: 2024-07-19 12:00:46.621837239 -0700 PDT Remote: 2024-07-19 12:00:46.378392 -0700 PDT m=+14.013022435 (delta=243.445239ms)
	I0719 12:00:46.440383    4831 fix.go:200] guest clock delta is within tolerance: 243.445239ms
	I0719 12:00:46.440386    4831 start.go:83] releasing machines lock for "multinode-871000", held for 13.633904801s
	I0719 12:00:46.440405    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:46.440536    4831 main.go:141] libmachine: (multinode-871000) Calling .GetIP
	I0719 12:00:46.440638    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:46.440941    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:46.441055    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:46.441135    4831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 12:00:46.441166    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:46.441190    4831 ssh_runner.go:195] Run: cat /version.json
	I0719 12:00:46.441201    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:46.441316    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:46.441330    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:46.441411    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:46.441438    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:46.441503    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:46.441561    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:46.441589    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/id_rsa Username:docker}
	I0719 12:00:46.441646    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/id_rsa Username:docker}
	I0719 12:00:46.475255    4831 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0719 12:00:46.475472    4831 ssh_runner.go:195] Run: systemctl --version
	I0719 12:00:46.523829    4831 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0719 12:00:46.524847    4831 command_runner.go:130] > systemd 252 (252)
	I0719 12:00:46.524884    4831 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 12:00:46.525018    4831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 12:00:46.530028    4831 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 12:00:46.530049    4831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 12:00:46.530083    4831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 12:00:46.542690    4831 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0719 12:00:46.542713    4831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 12:00:46.542722    4831 start.go:495] detecting cgroup driver to use...
	I0719 12:00:46.542816    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 12:00:46.557179    4831 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 12:00:46.557498    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 12:00:46.565823    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 12:00:46.573971    4831 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 12:00:46.574016    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 12:00:46.582254    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 12:00:46.594975    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 12:00:46.608869    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 12:00:46.621959    4831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 12:00:46.634841    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 12:00:46.646924    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 12:00:46.656013    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 12:00:46.664861    4831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 12:00:46.672750    4831 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 12:00:46.672905    4831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 12:00:46.680831    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:00:46.777522    4831 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 12:00:46.796367    4831 start.go:495] detecting cgroup driver to use...
	I0719 12:00:46.796441    4831 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 12:00:46.816014    4831 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 12:00:46.816025    4831 command_runner.go:130] > [Unit]
	I0719 12:00:46.816032    4831 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 12:00:46.816036    4831 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 12:00:46.816041    4831 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 12:00:46.816045    4831 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 12:00:46.816052    4831 command_runner.go:130] > StartLimitBurst=3
	I0719 12:00:46.816057    4831 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 12:00:46.816063    4831 command_runner.go:130] > [Service]
	I0719 12:00:46.816068    4831 command_runner.go:130] > Type=notify
	I0719 12:00:46.816074    4831 command_runner.go:130] > Restart=on-failure
	I0719 12:00:46.816081    4831 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 12:00:46.816088    4831 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 12:00:46.816099    4831 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 12:00:46.816107    4831 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 12:00:46.816120    4831 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 12:00:46.816126    4831 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 12:00:46.816134    4831 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 12:00:46.816143    4831 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 12:00:46.816150    4831 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 12:00:46.816155    4831 command_runner.go:130] > ExecStart=
	I0719 12:00:46.816166    4831 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0719 12:00:46.816171    4831 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 12:00:46.816178    4831 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 12:00:46.816183    4831 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 12:00:46.816187    4831 command_runner.go:130] > LimitNOFILE=infinity
	I0719 12:00:46.816191    4831 command_runner.go:130] > LimitNPROC=infinity
	I0719 12:00:46.816194    4831 command_runner.go:130] > LimitCORE=infinity
	I0719 12:00:46.816199    4831 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 12:00:46.816204    4831 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 12:00:46.816207    4831 command_runner.go:130] > TasksMax=infinity
	I0719 12:00:46.816210    4831 command_runner.go:130] > TimeoutStartSec=0
	I0719 12:00:46.816215    4831 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 12:00:46.816220    4831 command_runner.go:130] > Delegate=yes
	I0719 12:00:46.816225    4831 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 12:00:46.816228    4831 command_runner.go:130] > KillMode=process
	I0719 12:00:46.816232    4831 command_runner.go:130] > [Install]
	I0719 12:00:46.816241    4831 command_runner.go:130] > WantedBy=multi-user.target
	I0719 12:00:46.816301    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 12:00:46.828017    4831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 12:00:46.841820    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 12:00:46.854311    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 12:00:46.865403    4831 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 12:00:46.885334    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 12:00:46.896643    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 12:00:46.911154    4831 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 12:00:46.911543    4831 ssh_runner.go:195] Run: which cri-dockerd
	I0719 12:00:46.914439    4831 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 12:00:46.914592    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 12:00:46.922635    4831 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 12:00:46.935842    4831 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 12:00:47.032258    4831 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 12:00:47.146507    4831 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 12:00:47.146582    4831 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 12:00:47.160491    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:00:47.256476    4831 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 12:00:49.580336    4831 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323849451s)
	I0719 12:00:49.580400    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 12:00:49.591628    4831 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0719 12:00:49.604680    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 12:00:49.615365    4831 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 12:00:49.709475    4831 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 12:00:49.817392    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:00:49.913248    4831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 12:00:49.926239    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 12:00:49.937484    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:00:50.040074    4831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 12:00:50.095245    4831 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 12:00:50.095324    4831 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 12:00:50.099885    4831 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0719 12:00:50.099906    4831 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 12:00:50.099912    4831 command_runner.go:130] > Device: 0,22	Inode: 741         Links: 1
	I0719 12:00:50.099917    4831 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0719 12:00:50.099920    4831 command_runner.go:130] > Access: 2024-07-19 19:00:50.234837118 +0000
	I0719 12:00:50.099925    4831 command_runner.go:130] > Modify: 2024-07-19 19:00:50.234837118 +0000
	I0719 12:00:50.099929    4831 command_runner.go:130] > Change: 2024-07-19 19:00:50.236836876 +0000
	I0719 12:00:50.099932    4831 command_runner.go:130] >  Birth: -
	I0719 12:00:50.100165    4831 start.go:563] Will wait 60s for crictl version
	I0719 12:00:50.100214    4831 ssh_runner.go:195] Run: which crictl
	I0719 12:00:50.103172    4831 command_runner.go:130] > /usr/bin/crictl
	I0719 12:00:50.103490    4831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 12:00:50.128144    4831 command_runner.go:130] > Version:  0.1.0
	I0719 12:00:50.128173    4831 command_runner.go:130] > RuntimeName:  docker
	I0719 12:00:50.128231    4831 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0719 12:00:50.128306    4831 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 12:00:50.129486    4831 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 12:00:50.129557    4831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 12:00:50.146155    4831 command_runner.go:130] > 27.0.3
	I0719 12:00:50.147029    4831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 12:00:50.163310    4831 command_runner.go:130] > 27.0.3
	I0719 12:00:50.209993    4831 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 12:00:50.210023    4831 main.go:141] libmachine: (multinode-871000) Calling .GetIP
	I0719 12:00:50.210225    4831 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0719 12:00:50.213504    4831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 12:00:50.223880    4831 kubeadm.go:883] updating cluster {Name:multinode-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.19 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 12:00:50.223981    4831 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:00:50.224031    4831 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 12:00:50.236070    4831 command_runner.go:130] > kindest/kindnetd:v20240719-e7903573
	I0719 12:00:50.236083    4831 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0719 12:00:50.236087    4831 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0719 12:00:50.236092    4831 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0719 12:00:50.236095    4831 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0719 12:00:50.236099    4831 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0719 12:00:50.236109    4831 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0719 12:00:50.236113    4831 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0719 12:00:50.236117    4831 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 12:00:50.236121    4831 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0719 12:00:50.237109    4831 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0719 12:00:50.237118    4831 docker.go:615] Images already preloaded, skipping extraction
	I0719 12:00:50.237183    4831 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 12:00:50.249337    4831 command_runner.go:130] > kindest/kindnetd:v20240719-e7903573
	I0719 12:00:50.249350    4831 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0719 12:00:50.249354    4831 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0719 12:00:50.249358    4831 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0719 12:00:50.249362    4831 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0719 12:00:50.249374    4831 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0719 12:00:50.249379    4831 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0719 12:00:50.249383    4831 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0719 12:00:50.249387    4831 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 12:00:50.249391    4831 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0719 12:00:50.250275    4831 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0719 12:00:50.250291    4831 cache_images.go:84] Images are preloaded, skipping loading
	I0719 12:00:50.250300    4831 kubeadm.go:934] updating node { 192.169.0.16 8443 v1.30.3 docker true true} ...
	I0719 12:00:50.250379    4831 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-871000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 12:00:50.250450    4831 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 12:00:50.267413    4831 command_runner.go:130] > cgroupfs
	I0719 12:00:50.268142    4831 cni.go:84] Creating CNI manager for ""
	I0719 12:00:50.268153    4831 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 12:00:50.268162    4831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 12:00:50.268187    4831 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.16 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-871000 NodeName:multinode-871000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 12:00:50.268266    4831 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-871000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 12:00:50.268328    4831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 12:00:50.276647    4831 command_runner.go:130] > kubeadm
	I0719 12:00:50.276655    4831 command_runner.go:130] > kubectl
	I0719 12:00:50.276658    4831 command_runner.go:130] > kubelet
	I0719 12:00:50.276767    4831 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 12:00:50.276810    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 12:00:50.284703    4831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0719 12:00:50.297809    4831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 12:00:50.311249    4831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0719 12:00:50.325231    4831 ssh_runner.go:195] Run: grep 192.169.0.16	control-plane.minikube.internal$ /etc/hosts
	I0719 12:00:50.328198    4831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 12:00:50.338427    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:00:50.433524    4831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 12:00:50.448598    4831 certs.go:68] Setting up /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000 for IP: 192.169.0.16
	I0719 12:00:50.448610    4831 certs.go:194] generating shared ca certs ...
	I0719 12:00:50.448620    4831 certs.go:226] acquiring lock for ca certs: {Name:mk78732514e475c67b8a22bdfb9da389d614aef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:00:50.448815    4831 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.key
	I0719 12:00:50.448890    4831 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.key
	I0719 12:00:50.448900    4831 certs.go:256] generating profile certs ...
	I0719 12:00:50.449015    4831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/client.key
	I0719 12:00:50.449096    4831 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/apiserver.key.70f33c4b
	I0719 12:00:50.449168    4831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/proxy-client.key
	I0719 12:00:50.449175    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 12:00:50.449197    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 12:00:50.449217    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 12:00:50.449237    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 12:00:50.449261    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 12:00:50.449294    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 12:00:50.449325    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 12:00:50.449344    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 12:00:50.449453    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592.pem (1338 bytes)
	W0719 12:00:50.449504    4831 certs.go:480] ignoring /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592_empty.pem, impossibly tiny 0 bytes
	I0719 12:00:50.449512    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 12:00:50.449558    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem (1078 bytes)
	I0719 12:00:50.449602    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem (1123 bytes)
	I0719 12:00:50.449649    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem (1675 bytes)
	I0719 12:00:50.449742    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem (1708 bytes)
	I0719 12:00:50.449787    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:00:50.449808    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592.pem -> /usr/share/ca-certificates/1592.pem
	I0719 12:00:50.449826    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> /usr/share/ca-certificates/15922.pem
	I0719 12:00:50.450284    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 12:00:50.486710    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 12:00:50.510026    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 12:00:50.533733    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 12:00:50.555585    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 12:00:50.581721    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 12:00:50.601477    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 12:00:50.621221    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 12:00:50.641584    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 12:00:50.661200    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592.pem --> /usr/share/ca-certificates/1592.pem (1338 bytes)
	I0719 12:00:50.681320    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem --> /usr/share/ca-certificates/15922.pem (1708 bytes)
	I0719 12:00:50.701028    4831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 12:00:50.714328    4831 ssh_runner.go:195] Run: openssl version
	I0719 12:00:50.718364    4831 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0719 12:00:50.718501    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 12:00:50.726857    4831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:00:50.730156    4831 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 18:14 /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:00:50.730260    4831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:14 /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:00:50.730295    4831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:00:50.734347    4831 command_runner.go:130] > b5213941
	I0719 12:00:50.734466    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 12:00:50.742662    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1592.pem && ln -fs /usr/share/ca-certificates/1592.pem /etc/ssl/certs/1592.pem"
	I0719 12:00:50.750882    4831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1592.pem
	I0719 12:00:50.754122    4831 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 18:22 /usr/share/ca-certificates/1592.pem
	I0719 12:00:50.754254    4831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:22 /usr/share/ca-certificates/1592.pem
	I0719 12:00:50.754291    4831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1592.pem
	I0719 12:00:50.758486    4831 command_runner.go:130] > 51391683
	I0719 12:00:50.758538    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1592.pem /etc/ssl/certs/51391683.0"
	I0719 12:00:50.766824    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15922.pem && ln -fs /usr/share/ca-certificates/15922.pem /etc/ssl/certs/15922.pem"
	I0719 12:00:50.775119    4831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15922.pem
	I0719 12:00:50.778582    4831 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 18:22 /usr/share/ca-certificates/15922.pem
	I0719 12:00:50.778593    4831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:22 /usr/share/ca-certificates/15922.pem
	I0719 12:00:50.778630    4831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15922.pem
	I0719 12:00:50.782894    4831 command_runner.go:130] > 3ec20f2e
	I0719 12:00:50.783005    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15922.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 12:00:50.791555    4831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 12:00:50.795062    4831 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 12:00:50.795072    4831 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0719 12:00:50.795077    4831 command_runner.go:130] > Device: 253,1	Inode: 531528      Links: 1
	I0719 12:00:50.795082    4831 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 12:00:50.795090    4831 command_runner.go:130] > Access: 2024-07-19 18:54:57.287531357 +0000
	I0719 12:00:50.795095    4831 command_runner.go:130] > Modify: 2024-07-19 18:54:57.287531357 +0000
	I0719 12:00:50.795106    4831 command_runner.go:130] > Change: 2024-07-19 18:54:57.287531357 +0000
	I0719 12:00:50.795111    4831 command_runner.go:130] >  Birth: 2024-07-19 18:54:57.287531357 +0000
	I0719 12:00:50.795154    4831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 12:00:50.799586    4831 command_runner.go:130] > Certificate will not expire
	I0719 12:00:50.799648    4831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 12:00:50.804014    4831 command_runner.go:130] > Certificate will not expire
	I0719 12:00:50.804063    4831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 12:00:50.808385    4831 command_runner.go:130] > Certificate will not expire
	I0719 12:00:50.808509    4831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 12:00:50.812809    4831 command_runner.go:130] > Certificate will not expire
	I0719 12:00:50.812882    4831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 12:00:50.817105    4831 command_runner.go:130] > Certificate will not expire
	I0719 12:00:50.817155    4831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 12:00:50.821468    4831 command_runner.go:130] > Certificate will not expire
	I0719 12:00:50.821558    4831 kubeadm.go:392] StartCluster: {Name:multinode-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.19 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:00:50.821673    4831 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 12:00:50.833650    4831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 12:00:50.841330    4831 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0719 12:00:50.841339    4831 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0719 12:00:50.841344    4831 command_runner.go:130] > /var/lib/minikube/etcd:
	I0719 12:00:50.841363    4831 command_runner.go:130] > member
	I0719 12:00:50.841375    4831 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 12:00:50.841386    4831 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 12:00:50.841422    4831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 12:00:50.848761    4831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 12:00:50.849095    4831 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-871000" does not appear in /Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 12:00:50.849177    4831 kubeconfig.go:62] /Users/jenkins/minikube-integration/19307-1053/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-871000" cluster setting kubeconfig missing "multinode-871000" context setting]
	I0719 12:00:50.849405    4831 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1053/kubeconfig: {Name:mk7cfae7eb77889432abd85178928820b2e794ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:00:50.850051    4831 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 12:00:50.850266    4831 kapi.go:59] client config for multinode-871000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/client.key", CAFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xebf8ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 12:00:50.850580    4831 cert_rotation.go:137] Starting client certificate rotation controller
	I0719 12:00:50.850753    4831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 12:00:50.857853    4831 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.16
	I0719 12:00:50.857870    4831 kubeadm.go:1160] stopping kube-system containers ...
	I0719 12:00:50.857927    4831 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 12:00:50.871812    4831 command_runner.go:130] > 5a07c503ef10
	I0719 12:00:50.871825    4831 command_runner.go:130] > 1a451af36360
	I0719 12:00:50.871829    4831 command_runner.go:130] > 6ddb80b3c9e9
	I0719 12:00:50.871834    4831 command_runner.go:130] > c0dd65646579
	I0719 12:00:50.871851    4831 command_runner.go:130] > 9fb6361ebde6
	I0719 12:00:50.871855    4831 command_runner.go:130] > a2327b8c83c0
	I0719 12:00:50.871858    4831 command_runner.go:130] > 492c042de032
	I0719 12:00:50.871861    4831 command_runner.go:130] > 587cdaf6e20c
	I0719 12:00:50.871865    4831 command_runner.go:130] > a094a5e71d55
	I0719 12:00:50.871868    4831 command_runner.go:130] > a69e88441e03
	I0719 12:00:50.871874    4831 command_runner.go:130] > e5a9045d5578
	I0719 12:00:50.871878    4831 command_runner.go:130] > 72d515f79956
	I0719 12:00:50.871881    4831 command_runner.go:130] > ae60ee8266a7
	I0719 12:00:50.871884    4831 command_runner.go:130] > ce0d6620b5f9
	I0719 12:00:50.871891    4831 command_runner.go:130] > 2fb0e3bd3145
	I0719 12:00:50.871895    4831 command_runner.go:130] > 48bd43fcf8d2
	I0719 12:00:50.872623    4831 docker.go:483] Stopping containers: [5a07c503ef10 1a451af36360 6ddb80b3c9e9 c0dd65646579 9fb6361ebde6 a2327b8c83c0 492c042de032 587cdaf6e20c a094a5e71d55 a69e88441e03 e5a9045d5578 72d515f79956 ae60ee8266a7 ce0d6620b5f9 2fb0e3bd3145 48bd43fcf8d2]
	I0719 12:00:50.872690    4831 ssh_runner.go:195] Run: docker stop 5a07c503ef10 1a451af36360 6ddb80b3c9e9 c0dd65646579 9fb6361ebde6 a2327b8c83c0 492c042de032 587cdaf6e20c a094a5e71d55 a69e88441e03 e5a9045d5578 72d515f79956 ae60ee8266a7 ce0d6620b5f9 2fb0e3bd3145 48bd43fcf8d2
	I0719 12:00:50.884270    4831 command_runner.go:130] > 5a07c503ef10
	I0719 12:00:50.885737    4831 command_runner.go:130] > 1a451af36360
	I0719 12:00:50.885748    4831 command_runner.go:130] > 6ddb80b3c9e9
	I0719 12:00:50.885752    4831 command_runner.go:130] > c0dd65646579
	I0719 12:00:50.885756    4831 command_runner.go:130] > 9fb6361ebde6
	I0719 12:00:50.885759    4831 command_runner.go:130] > a2327b8c83c0
	I0719 12:00:50.885764    4831 command_runner.go:130] > 492c042de032
	I0719 12:00:50.886041    4831 command_runner.go:130] > 587cdaf6e20c
	I0719 12:00:50.886104    4831 command_runner.go:130] > a094a5e71d55
	I0719 12:00:50.886154    4831 command_runner.go:130] > a69e88441e03
	I0719 12:00:50.886163    4831 command_runner.go:130] > e5a9045d5578
	I0719 12:00:50.886167    4831 command_runner.go:130] > 72d515f79956
	I0719 12:00:50.886170    4831 command_runner.go:130] > ae60ee8266a7
	I0719 12:00:50.886458    4831 command_runner.go:130] > ce0d6620b5f9
	I0719 12:00:50.886466    4831 command_runner.go:130] > 2fb0e3bd3145
	I0719 12:00:50.886471    4831 command_runner.go:130] > 48bd43fcf8d2
	I0719 12:00:50.887418    4831 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 12:00:50.899484    4831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 12:00:50.906848    4831 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0719 12:00:50.906859    4831 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0719 12:00:50.906864    4831 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0719 12:00:50.906870    4831 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 12:00:50.907008    4831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 12:00:50.907023    4831 kubeadm.go:157] found existing configuration files:
	
	I0719 12:00:50.907067    4831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 12:00:50.914102    4831 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 12:00:50.914121    4831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 12:00:50.914163    4831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 12:00:50.921369    4831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 12:00:50.928626    4831 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 12:00:50.928646    4831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 12:00:50.928691    4831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 12:00:50.936009    4831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 12:00:50.942964    4831 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 12:00:50.942985    4831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 12:00:50.943022    4831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 12:00:50.950263    4831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 12:00:50.957315    4831 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 12:00:50.957328    4831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 12:00:50.957364    4831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 12:00:50.964862    4831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 12:00:50.972390    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 12:00:51.035216    4831 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 12:00:51.035258    4831 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0719 12:00:51.035483    4831 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0719 12:00:51.035573    4831 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 12:00:51.035813    4831 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0719 12:00:51.035962    4831 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0719 12:00:51.036249    4831 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0719 12:00:51.036392    4831 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0719 12:00:51.036567    4831 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0719 12:00:51.036704    4831 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 12:00:51.036845    4831 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 12:00:51.037805    4831 command_runner.go:130] > [certs] Using the existing "sa" key
	I0719 12:00:51.037917    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 12:00:51.076683    4831 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 12:00:51.282204    4831 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 12:00:51.377771    4831 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 12:00:51.638949    4831 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 12:00:51.795924    4831 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 12:00:51.912126    4831 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 12:00:51.913978    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 12:00:51.962857    4831 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 12:00:51.964163    4831 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 12:00:51.964173    4831 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0719 12:00:52.077102    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 12:00:52.122974    4831 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 12:00:52.122990    4831 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 12:00:52.129548    4831 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 12:00:52.132019    4831 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 12:00:52.136000    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 12:00:52.209857    4831 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 12:00:52.217061    4831 api_server.go:52] waiting for apiserver process to appear ...
	I0719 12:00:52.217135    4831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 12:00:52.717207    4831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 12:00:53.217616    4831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 12:00:53.230644    4831 command_runner.go:130] > 1608
	I0719 12:00:53.230846    4831 api_server.go:72] duration metric: took 1.013796396s to wait for apiserver process to appear ...
	I0719 12:00:53.230858    4831 api_server.go:88] waiting for apiserver healthz status ...
	I0719 12:00:53.230876    4831 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0719 12:00:55.238758    4831 api_server.go:279] https://192.169.0.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 12:00:55.238773    4831 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 12:00:55.238784    4831 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0719 12:00:55.279295    4831 api_server.go:279] https://192.169.0.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 12:00:55.279319    4831 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 12:00:55.730933    4831 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0719 12:00:55.735667    4831 api_server.go:279] https://192.169.0.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 12:00:55.735678    4831 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 12:00:56.231298    4831 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0719 12:00:56.235031    4831 api_server.go:279] https://192.169.0.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 12:00:56.235046    4831 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 12:00:56.732459    4831 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0719 12:00:56.736437    4831 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0719 12:00:56.736498    4831 round_trippers.go:463] GET https://192.169.0.16:8443/version
	I0719 12:00:56.736504    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:56.736512    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:56.736517    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:56.741567    4831 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 12:00:56.741579    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:56.741585    4831 round_trippers.go:580]     Audit-Id: 775c2944-6cec-4689-817e-4a722972a289
	I0719 12:00:56.741588    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:56.741591    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:56.741594    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:56.741597    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:56.741600    4831 round_trippers.go:580]     Content-Length: 263
	I0719 12:00:56.741603    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:56 GMT
	I0719 12:00:56.741623    4831 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0719 12:00:56.741668    4831 api_server.go:141] control plane version: v1.30.3
	I0719 12:00:56.741679    4831 api_server.go:131] duration metric: took 3.510827315s to wait for apiserver health ...
	I0719 12:00:56.741684    4831 cni.go:84] Creating CNI manager for ""
	I0719 12:00:56.741688    4831 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 12:00:56.781141    4831 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 12:00:56.817916    4831 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 12:00:56.823787    4831 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0719 12:00:56.823802    4831 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0719 12:00:56.823807    4831 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0719 12:00:56.823812    4831 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 12:00:56.823817    4831 command_runner.go:130] > Access: 2024-07-19 19:00:42.772281672 +0000
	I0719 12:00:56.823821    4831 command_runner.go:130] > Modify: 2024-07-18 23:04:21.000000000 +0000
	I0719 12:00:56.823826    4831 command_runner.go:130] > Change: 2024-07-19 19:00:40.582734066 +0000
	I0719 12:00:56.823829    4831 command_runner.go:130] >  Birth: -
	I0719 12:00:56.823866    4831 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 12:00:56.823872    4831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 12:00:56.857940    4831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 12:00:57.254573    4831 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0719 12:00:57.277003    4831 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0719 12:00:57.402285    4831 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0719 12:00:57.454938    4831 command_runner.go:130] > daemonset.apps/kindnet configured
	I0719 12:00:57.456362    4831 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 12:00:57.456442    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:00:57.456452    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.456458    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.456461    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.459746    4831 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 12:00:57.459754    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.459759    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.459763    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.459765    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.459768    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.459770    4831 round_trippers.go:580]     Audit-Id: 1e339931-92ed-4f97-b0ab-26c9e8a733e5
	I0719 12:00:57.459773    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.460649    4831 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"979"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87012 chars]
	I0719 12:00:57.463655    4831 system_pods.go:59] 12 kube-system pods found
	I0719 12:00:57.463673    4831 system_pods.go:61] "coredns-7db6d8ff4d-85r26" [c7d62ec5-693b-46ab-9437-86aef8b469e8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 12:00:57.463679    4831 system_pods.go:61] "etcd-multinode-871000" [8818ed52-4b2d-4629-af02-b835e3cfa034] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 12:00:57.463683    4831 system_pods.go:61] "kindnet-4stbd" [58fb2d63-07bb-4a27-87c5-4e259083f5be] Running
	I0719 12:00:57.463687    4831 system_pods.go:61] "kindnet-897rz" [a3c96d7b-9aa1-49e1-9fa6-8aad9551be4f] Running
	I0719 12:00:57.463690    4831 system_pods.go:61] "kindnet-hht5h" [f1a7b402-0cf3-469c-8124-6b53aa34f4c7] Running
	I0719 12:00:57.463694    4831 system_pods.go:61] "kube-apiserver-multinode-871000" [9f3fdf92-3cbd-411c-802e-cbbbe1b60d68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 12:00:57.463698    4831 system_pods.go:61] "kube-controller-manager-multinode-871000" [74e143fb-26b8-4d1d-b07a-f1b2c590133f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 12:00:57.463706    4831 system_pods.go:61] "kube-proxy-86ssb" [37609942-98d8-4c6b-b339-53bf3a901e3f] Running
	I0719 12:00:57.463710    4831 system_pods.go:61] "kube-proxy-89hm2" [77b4b485-53f0-4480-8b62-a1df26f037b8] Running
	I0719 12:00:57.463713    4831 system_pods.go:61] "kube-proxy-t9bqq" [5ef191fc-6e2e-486c-b825-76c6e0d95416] Running
	I0719 12:00:57.463720    4831 system_pods.go:61] "kube-scheduler-multinode-871000" [0d73182a-0458-470e-ac06-ccde27fa113a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 12:00:57.463724    4831 system_pods.go:61] "storage-provisioner" [ccd0aaec-abf0-4aec-9ebf-14f619510aeb] Running
	I0719 12:00:57.463729    4831 system_pods.go:74] duration metric: took 7.359738ms to wait for pod list to return data ...
	I0719 12:00:57.463736    4831 node_conditions.go:102] verifying NodePressure condition ...
	I0719 12:00:57.463768    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0719 12:00:57.463773    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.463779    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.463783    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.465729    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:00:57.465743    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.465770    4831 round_trippers.go:580]     Audit-Id: bbb46d1f-7fb5-4c51-a18b-f479c702e9c5
	I0719 12:00:57.465796    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.465806    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.465811    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.465814    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.465816    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.465935    4831 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"979"},"items":[{"metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14802 chars]
	I0719 12:00:57.466445    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:00:57.466457    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:00:57.466466    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:00:57.466470    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:00:57.466474    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:00:57.466476    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:00:57.466482    4831 node_conditions.go:105] duration metric: took 2.740642ms to run NodePressure ...
	I0719 12:00:57.466491    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 12:00:57.559189    4831 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0719 12:00:57.714400    4831 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0719 12:00:57.715373    4831 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 12:00:57.715432    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0719 12:00:57.715437    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.715443    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.715446    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.717456    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:57.717464    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.717469    4831 round_trippers.go:580]     Audit-Id: f420da95-9f09-4d87-b8c4-3b267b4d6865
	I0719 12:00:57.717472    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.717474    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.717477    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.717480    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.717494    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.718031    4831 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"982"},"items":[{"metadata":{"name":"etcd-multinode-871000","namespace":"kube-system","uid":"8818ed52-4b2d-4629-af02-b835e3cfa034","resourceVersion":"952","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.mirror":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.seen":"2024-07-19T18:55:05.740545259Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30912 chars]
	I0719 12:00:57.718730    4831 kubeadm.go:739] kubelet initialised
	I0719 12:00:57.718739    4831 kubeadm.go:740] duration metric: took 3.356875ms waiting for restarted kubelet to initialise ...
	I0719 12:00:57.718747    4831 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 12:00:57.718778    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:00:57.718783    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.718788    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.718791    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.721109    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:57.721118    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.721123    4831 round_trippers.go:580]     Audit-Id: 7c174756-6655-4d34-8f82-e9921bf5bed0
	I0719 12:00:57.721127    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.721132    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.721136    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.721140    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.721144    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.721839    4831 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"982"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87012 chars]
	I0719 12:00:57.723644    4831 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:57.723690    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:00:57.723696    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.723702    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.723706    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.724990    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:00:57.724998    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.725004    4831 round_trippers.go:580]     Audit-Id: 1c96e36b-039e-436f-ac73-b69e67d82f1f
	I0719 12:00:57.725010    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.725014    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.725019    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.725022    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.725026    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.725179    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0719 12:00:57.725410    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:57.725417    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.725422    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.725427    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.726851    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:00:57.726861    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.726867    4831 round_trippers.go:580]     Audit-Id: 2628c7a8-ce9d-4c5b-b9d9-1338663469ee
	I0719 12:00:57.726872    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.726875    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.726879    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.726882    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.726884    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.726971    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:00:57.727158    4831 pod_ready.go:97] node "multinode-871000" hosting pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.727168    4831 pod_ready.go:81] duration metric: took 3.515447ms for pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace to be "Ready" ...
	E0719 12:00:57.727186    4831 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-871000" hosting pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.727195    4831 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:57.727223    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-871000
	I0719 12:00:57.727228    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.727233    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.727237    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.728361    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:00:57.728369    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.728374    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.728381    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.728386    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.728390    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.728394    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.728399    4831 round_trippers.go:580]     Audit-Id: d9c38a9e-6095-4979-a34c-4a3222140fc0
	I0719 12:00:57.728543    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-871000","namespace":"kube-system","uid":"8818ed52-4b2d-4629-af02-b835e3cfa034","resourceVersion":"952","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.mirror":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.seen":"2024-07-19T18:55:05.740545259Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0719 12:00:57.728746    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:57.728753    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.728759    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.728762    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.729634    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:00:57.729643    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.729651    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.729657    4831 round_trippers.go:580]     Audit-Id: f42a4100-c3e2-41ff-aeee-49c731be4038
	I0719 12:00:57.729660    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.729664    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.729669    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.729673    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.729840    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:00:57.730004    4831 pod_ready.go:97] node "multinode-871000" hosting pod "etcd-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.730015    4831 pod_ready.go:81] duration metric: took 2.812926ms for pod "etcd-multinode-871000" in "kube-system" namespace to be "Ready" ...
	E0719 12:00:57.730020    4831 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-871000" hosting pod "etcd-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.730029    4831 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:57.730063    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-871000
	I0719 12:00:57.730068    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.730073    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.730078    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.730934    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:00:57.730941    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.730945    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.730951    4831 round_trippers.go:580]     Audit-Id: 6efa9f85-27dd-430e-97e8-fb170a086f2f
	I0719 12:00:57.730955    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.730960    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.730965    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.730968    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.731160    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-871000","namespace":"kube-system","uid":"9f3fdf92-3cbd-411c-802e-cbbbe1b60d68","resourceVersion":"953","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"1acc565de321609aa117f6402dfd5ae5","kubernetes.io/config.mirror":"1acc565de321609aa117f6402dfd5ae5","kubernetes.io/config.seen":"2024-07-19T18:55:05.740548209Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8135 chars]
	I0719 12:00:57.731378    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:57.731384    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.731389    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.731392    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.732315    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:00:57.732323    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.732328    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.732331    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.732334    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.732336    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.732339    4831 round_trippers.go:580]     Audit-Id: 54fc7b1c-f6d5-4cd2-a2f9-7fb73f2ffe73
	I0719 12:00:57.732343    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.732413    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:00:57.732579    4831 pod_ready.go:97] node "multinode-871000" hosting pod "kube-apiserver-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.732588    4831 pod_ready.go:81] duration metric: took 2.55315ms for pod "kube-apiserver-multinode-871000" in "kube-system" namespace to be "Ready" ...
	E0719 12:00:57.732593    4831 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-871000" hosting pod "kube-apiserver-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.732598    4831 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:57.732625    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-871000
	I0719 12:00:57.732630    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.732635    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.732640    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.733553    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:00:57.733560    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.733565    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.733569    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.733571    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.733575    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.733578    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.733580    4831 round_trippers.go:580]     Audit-Id: b13079da-9dc2-4160-a834-34a01e90bb5f
	I0719 12:00:57.733652    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-871000","namespace":"kube-system","uid":"74e143fb-26b8-4d1d-b07a-f1b2c590133f","resourceVersion":"950","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8f11f40ce051787c8d8ced4f83327f27","kubernetes.io/config.mirror":"8f11f40ce051787c8d8ced4f83327f27","kubernetes.io/config.seen":"2024-07-19T18:55:05.740548943Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7726 chars]
	I0719 12:00:57.856725    4831 request.go:629] Waited for 122.781464ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:57.856773    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:57.856784    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.856795    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.856804    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.859847    4831 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 12:00:57.859862    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.859869    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.859874    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:58 GMT
	I0719 12:00:57.859879    4831 round_trippers.go:580]     Audit-Id: 2cc1c1bd-ec94-4674-b418-6bc8427a19bb
	I0719 12:00:57.859883    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.859887    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.859890    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.860050    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:00:57.860338    4831 pod_ready.go:97] node "multinode-871000" hosting pod "kube-controller-manager-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.860353    4831 pod_ready.go:81] duration metric: took 127.748716ms for pod "kube-controller-manager-multinode-871000" in "kube-system" namespace to be "Ready" ...
	E0719 12:00:57.860361    4831 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-871000" hosting pod "kube-controller-manager-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.860379    4831 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-86ssb" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:58.057325    4831 request.go:629] Waited for 196.89606ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-86ssb
	I0719 12:00:58.057373    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-86ssb
	I0719 12:00:58.057380    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:58.057390    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:58.057398    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:58.059950    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:58.059960    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:58.059965    4831 round_trippers.go:580]     Audit-Id: f10eb547-809f-4c54-a6ec-b2288b02ab01
	I0719 12:00:58.059968    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:58.059971    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:58.059974    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:58.059976    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:58.059979    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:58 GMT
	I0719 12:00:58.060113    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-86ssb","generateName":"kube-proxy-","namespace":"kube-system","uid":"37609942-98d8-4c6b-b339-53bf3a901e3f","resourceVersion":"862","creationTimestamp":"2024-07-19T18:57:03Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0719 12:00:58.257183    4831 request.go:629] Waited for 196.701375ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m03
	I0719 12:00:58.257308    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m03
	I0719 12:00:58.257320    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:58.257331    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:58.257337    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:58.259593    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:58.259606    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:58.259612    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:58.259617    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:58 GMT
	I0719 12:00:58.259620    4831 round_trippers.go:580]     Audit-Id: 4dc0fb4d-0db3-4b8d-a313-9758d1995d8b
	I0719 12:00:58.259634    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:58.259638    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:58.259643    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:58.259946    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m03","uid":"4745805a-e01a-4411-b942-abcd092662c6","resourceVersion":"889","creationTimestamp":"2024-07-19T18:59:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T11_59_53_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:59:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3641 chars]
	I0719 12:00:58.260173    4831 pod_ready.go:92] pod "kube-proxy-86ssb" in "kube-system" namespace has status "Ready":"True"
	I0719 12:00:58.260185    4831 pod_ready.go:81] duration metric: took 399.799711ms for pod "kube-proxy-86ssb" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:58.260194    4831 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-89hm2" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:58.458045    4831 request.go:629] Waited for 197.804124ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89hm2
	I0719 12:00:58.458170    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89hm2
	I0719 12:00:58.458179    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:58.458191    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:58.458197    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:58.460538    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:58.460555    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:58.460564    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:58.460572    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:58.460578    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:58.460587    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:58 GMT
	I0719 12:00:58.460593    4831 round_trippers.go:580]     Audit-Id: 4c96c3a8-6e4e-4dba-8774-2c436d82589a
	I0719 12:00:58.460598    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:58.460767    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-89hm2","generateName":"kube-proxy-","namespace":"kube-system","uid":"77b4b485-53f0-4480-8b62-a1df26f037b8","resourceVersion":"979","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0719 12:00:58.658537    4831 request.go:629] Waited for 197.37511ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:58.658716    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:58.658727    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:58.658738    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:58.658744    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:58.661417    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:58.661434    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:58.661441    4831 round_trippers.go:580]     Audit-Id: 7126b032-d2be-4749-a3a5-c0204a3449bc
	I0719 12:00:58.661446    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:58.661466    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:58.661478    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:58.661482    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:58.661491    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:58 GMT
	I0719 12:00:58.661584    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:00:58.661831    4831 pod_ready.go:97] node "multinode-871000" hosting pod "kube-proxy-89hm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:58.661846    4831 pod_ready.go:81] duration metric: took 401.647379ms for pod "kube-proxy-89hm2" in "kube-system" namespace to be "Ready" ...
	E0719 12:00:58.661856    4831 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-871000" hosting pod "kube-proxy-89hm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:58.661872    4831 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t9bqq" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:58.856578    4831 request.go:629] Waited for 194.656885ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t9bqq
	I0719 12:00:58.856687    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t9bqq
	I0719 12:00:58.856697    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:58.856709    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:58.856717    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:58.859522    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:58.859539    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:58.859546    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:58.859552    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:58.859557    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:58.859561    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:58.859564    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:59 GMT
	I0719 12:00:58.859568    4831 round_trippers.go:580]     Audit-Id: d6a6906a-da0f-40ca-81be-6c8c66da5cb5
	I0719 12:00:58.859682    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t9bqq","generateName":"kube-proxy-","namespace":"kube-system","uid":"5ef191fc-6e2e-486c-b825-76c6e0d95416","resourceVersion":"523","creationTimestamp":"2024-07-19T18:56:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:56:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0719 12:00:59.058021    4831 request.go:629] Waited for 197.993839ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:00:59.058176    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:00:59.058187    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:59.058198    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:59.058206    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:59.060916    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:59.060937    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:59.060948    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:59.060954    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:59.060958    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:59 GMT
	I0719 12:00:59.060965    4831 round_trippers.go:580]     Audit-Id: b3383d03-869e-4dc0-865c-296bb6ac6bba
	I0719 12:00:59.060970    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:59.060976    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:59.061476    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"e0450b58-f42e-4eee-a22b-05f89b4b721d","resourceVersion":"589","creationTimestamp":"2024-07-19T18:56:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T11_56_14_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:56:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0719 12:00:59.061730    4831 pod_ready.go:92] pod "kube-proxy-t9bqq" in "kube-system" namespace has status "Ready":"True"
	I0719 12:00:59.061742    4831 pod_ready.go:81] duration metric: took 399.862582ms for pod "kube-proxy-t9bqq" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:59.061752    4831 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:59.258570    4831 request.go:629] Waited for 196.730856ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-871000
	I0719 12:00:59.258699    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-871000
	I0719 12:00:59.258709    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:59.258720    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:59.258725    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:59.261370    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:59.261382    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:59.261389    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:59 GMT
	I0719 12:00:59.261418    4831 round_trippers.go:580]     Audit-Id: 23659b5b-7026-4753-9b67-8bd41b92b47d
	I0719 12:00:59.261466    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:59.261482    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:59.261488    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:59.261494    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:59.261864    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-871000","namespace":"kube-system","uid":"0d73182a-0458-470e-ac06-ccde27fa113a","resourceVersion":"948","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b43ab2caff1f80690c8bfbb88ac08a85","kubernetes.io/config.mirror":"b43ab2caff1f80690c8bfbb88ac08a85","kubernetes.io/config.seen":"2024-07-19T18:55:00.040869314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5438 chars]
	I0719 12:00:59.456844    4831 request.go:629] Waited for 194.664848ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:59.456973    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:59.456982    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:59.456995    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:59.457004    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:59.459649    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:59.459664    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:59.459671    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:59.459675    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:59 GMT
	I0719 12:00:59.459679    4831 round_trippers.go:580]     Audit-Id: 5c5adbfc-9e7b-4172-b121-c2d1431e9d6d
	I0719 12:00:59.459682    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:59.459685    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:59.459688    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:59.459804    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:00:59.460084    4831 pod_ready.go:97] node "multinode-871000" hosting pod "kube-scheduler-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:59.460102    4831 pod_ready.go:81] duration metric: took 398.345213ms for pod "kube-scheduler-multinode-871000" in "kube-system" namespace to be "Ready" ...
	E0719 12:00:59.460111    4831 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-871000" hosting pod "kube-scheduler-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:59.460120    4831 pod_ready.go:38] duration metric: took 1.74137125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 12:00:59.460137    4831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 12:00:59.470055    4831 command_runner.go:130] > -16
	I0719 12:00:59.470100    4831 ops.go:34] apiserver oom_adj: -16
	I0719 12:00:59.470108    4831 kubeadm.go:597] duration metric: took 8.628745324s to restartPrimaryControlPlane
	I0719 12:00:59.470115    4831 kubeadm.go:394] duration metric: took 8.648588625s to StartCluster
	I0719 12:00:59.470125    4831 settings.go:142] acquiring lock: {Name:mk32b18012e36d8300f16bafebdd450435b306a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:00:59.470229    4831 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 12:00:59.470588    4831 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1053/kubeconfig: {Name:mk7cfae7eb77889432abd85178928820b2e794ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:00:59.470958    4831 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:00:59.470985    4831 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 12:00:59.471122    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:00:59.494234    4831 out.go:177] * Verifying Kubernetes components...
	I0719 12:00:59.537235    4831 out.go:177] * Enabled addons: 
	I0719 12:00:59.558133    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:00:59.579095    4831 addons.go:510] duration metric: took 108.11473ms for enable addons: enabled=[]
	I0719 12:00:59.695256    4831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 12:00:59.706084    4831 node_ready.go:35] waiting up to 6m0s for node "multinode-871000" to be "Ready" ...
	I0719 12:00:59.706137    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:59.706143    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:59.706149    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:59.706154    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:59.707867    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:00:59.707877    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:59.707886    4831 round_trippers.go:580]     Audit-Id: 66c66110-fcff-40f5-8e0d-e068bc010762
	I0719 12:00:59.707889    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:59.707894    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:59.707896    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:59.707898    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:59.707901    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:59 GMT
	I0719 12:00:59.708039    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:00.206691    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:00.206718    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:00.206730    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:00.206736    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:00.208920    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:00.208933    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:00.208941    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:00 GMT
	I0719 12:01:00.208946    4831 round_trippers.go:580]     Audit-Id: 0a129778-1996-40bb-a48c-fa0ac4c803b1
	I0719 12:01:00.208949    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:00.208953    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:00.208956    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:00.208959    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:00.209110    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:00.706969    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:00.706993    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:00.707005    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:00.707011    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:00.709702    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:00.709718    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:00.709725    4831 round_trippers.go:580]     Audit-Id: 236326f2-e9d0-4a0e-b41f-807eb6b67134
	I0719 12:01:00.709729    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:00.709773    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:00.709781    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:00.709786    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:00.709790    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:00 GMT
	I0719 12:01:00.710141    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:01.206241    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:01.206252    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:01.206258    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:01.206261    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:01.208805    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:01.208818    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:01.208826    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:01.208832    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:01.208836    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:01 GMT
	I0719 12:01:01.208840    4831 round_trippers.go:580]     Audit-Id: 432acf5e-037c-436b-b152-26648b7bb65c
	I0719 12:01:01.208844    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:01.208847    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:01.209124    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:01.706530    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:01.706547    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:01.706556    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:01.706561    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:01.708554    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:01.708566    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:01.708573    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:01.708580    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:01.708584    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:01 GMT
	I0719 12:01:01.708587    4831 round_trippers.go:580]     Audit-Id: 0f065534-6fb4-4385-9ea9-66267e61e0d7
	I0719 12:01:01.708591    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:01.708594    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:01.708774    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:01.708958    4831 node_ready.go:53] node "multinode-871000" has status "Ready":"False"
	I0719 12:01:02.206706    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:02.206726    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:02.206737    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:02.206746    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:02.209339    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:02.209361    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:02.209374    4831 round_trippers.go:580]     Audit-Id: e65067d5-67e9-4674-9522-e48215ef9e7b
	I0719 12:01:02.209381    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:02.209390    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:02.209399    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:02.209408    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:02.209416    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:02 GMT
	I0719 12:01:02.209648    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:02.707234    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:02.707259    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:02.707268    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:02.707273    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:02.710126    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:02.710141    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:02.710149    4831 round_trippers.go:580]     Audit-Id: 3b8ad13c-1ba8-4ec4-8bdc-3c4a7a5b8576
	I0719 12:01:02.710153    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:02.710156    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:02.710159    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:02.710163    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:02.710167    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:02 GMT
	I0719 12:01:02.710634    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:03.206851    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:03.206865    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:03.206871    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:03.206874    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:03.209244    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:03.209255    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:03.209261    4831 round_trippers.go:580]     Audit-Id: 3d163c89-fc3b-4ad8-81fc-eddd33e8b795
	I0719 12:01:03.209264    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:03.209267    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:03.209269    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:03.209272    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:03.209275    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:03 GMT
	I0719 12:01:03.209364    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:03.706624    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:03.706642    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:03.706650    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:03.706655    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:03.708618    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:03.708627    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:03.708633    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:03.708635    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:03 GMT
	I0719 12:01:03.708639    4831 round_trippers.go:580]     Audit-Id: 95e29de8-1653-4a86-92d1-72bd48dd939e
	I0719 12:01:03.708643    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:03.708645    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:03.708648    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:03.708947    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:03.709130    4831 node_ready.go:53] node "multinode-871000" has status "Ready":"False"
	I0719 12:01:04.207051    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:04.207072    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:04.207083    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:04.207090    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:04.209502    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:04.209520    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:04.209528    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:04 GMT
	I0719 12:01:04.209534    4831 round_trippers.go:580]     Audit-Id: 96033790-6d25-4b96-b7c4-29046f0224b4
	I0719 12:01:04.209537    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:04.209540    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:04.209544    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:04.209548    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:04.209618    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:04.707394    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:04.707419    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:04.707431    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:04.707436    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:04.710162    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:04.710180    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:04.710189    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:04.710196    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:04.710202    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:04.710207    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:04.710211    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:04 GMT
	I0719 12:01:04.710214    4831 round_trippers.go:580]     Audit-Id: 08bb4bbb-8b81-4c21-9276-2a88c11ad6ec
	I0719 12:01:04.710525    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:05.206378    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:05.206394    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:05.206401    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:05.206407    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:05.208145    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:05.208158    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:05.208165    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:05.208170    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:05.208178    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:05 GMT
	I0719 12:01:05.208182    4831 round_trippers.go:580]     Audit-Id: 109a3915-a328-4b3a-992b-103724858fb0
	I0719 12:01:05.208186    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:05.208191    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:05.208687    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:05.706790    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:05.706815    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:05.706826    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:05.706831    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:05.709511    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:05.709529    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:05.709540    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:05.709548    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:05.709556    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:05.709560    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:05.709564    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:05 GMT
	I0719 12:01:05.709569    4831 round_trippers.go:580]     Audit-Id: a76ab312-c8c9-4752-9286-ae31851fbdf8
	I0719 12:01:05.709652    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:05.709899    4831 node_ready.go:53] node "multinode-871000" has status "Ready":"False"
	I0719 12:01:06.207255    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:06.207275    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:06.207287    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:06.207292    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:06.209859    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:06.209872    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:06.209879    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:06.209911    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:06.209920    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:06.209926    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:06.209948    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:06 GMT
	I0719 12:01:06.209962    4831 round_trippers.go:580]     Audit-Id: a2098aa3-78e4-466d-b079-2e04d7f652cd
	I0719 12:01:06.210272    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:06.706413    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:06.706437    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:06.706450    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:06.706456    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:06.709140    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:06.709158    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:06.709166    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:06.709170    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:06.709175    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:06.709180    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:06.709183    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:06 GMT
	I0719 12:01:06.709187    4831 round_trippers.go:580]     Audit-Id: 8f00113f-9166-48de-8a53-a97b4e7caff2
	I0719 12:01:06.709601    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:07.206594    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:07.206616    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:07.206627    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:07.206634    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:07.209777    4831 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 12:01:07.209791    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:07.209798    4831 round_trippers.go:580]     Audit-Id: 40b9a91a-6fa8-417a-8e05-b10024d49aa9
	I0719 12:01:07.209804    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:07.209809    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:07.209814    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:07.209818    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:07.209822    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:07 GMT
	I0719 12:01:07.209922    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:07.706628    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:07.706647    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:07.706658    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:07.706664    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:07.708842    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:07.708855    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:07.708862    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:07.708867    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:07.708870    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:07.708874    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:07 GMT
	I0719 12:01:07.708878    4831 round_trippers.go:580]     Audit-Id: f6d0c2a9-45f4-4321-ad0c-ab21af2da3e2
	I0719 12:01:07.708884    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:07.708952    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:08.206496    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:08.206517    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:08.206530    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:08.206536    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:08.208874    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:08.208889    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:08.208899    4831 round_trippers.go:580]     Audit-Id: 61f21ffb-7227-499e-88a8-7f21eb34b247
	I0719 12:01:08.208905    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:08.208909    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:08.208912    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:08.208916    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:08.208920    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:08 GMT
	I0719 12:01:08.209075    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:08.209318    4831 node_ready.go:53] node "multinode-871000" has status "Ready":"False"
	I0719 12:01:08.707163    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:08.707183    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:08.707196    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:08.707204    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:08.709857    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:08.709873    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:08.709881    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:08 GMT
	I0719 12:01:08.709886    4831 round_trippers.go:580]     Audit-Id: 78633b24-3b54-472f-954b-ec23aa1dd09f
	I0719 12:01:08.709910    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:08.709922    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:08.709926    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:08.709930    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:08.710037    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1015","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0719 12:01:09.207243    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:09.207266    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:09.207277    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:09.207284    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:09.210073    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:09.210091    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:09.210099    4831 round_trippers.go:580]     Audit-Id: 3f41916e-be2c-4c7b-833a-e2f5466f4060
	I0719 12:01:09.210104    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:09.210109    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:09.210113    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:09.210118    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:09.210123    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:09 GMT
	I0719 12:01:09.210188    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1015","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0719 12:01:09.706282    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:09.706298    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:09.706306    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:09.706312    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:09.708273    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:09.708283    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:09.708290    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:09.708295    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:09.708299    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:09.708314    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:09.708322    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:09 GMT
	I0719 12:01:09.708326    4831 round_trippers.go:580]     Audit-Id: 90bdf03f-63c9-4f70-9629-4e4c7bd07af9
	I0719 12:01:09.708452    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1015","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0719 12:01:10.207338    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:10.207354    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:10.207364    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:10.207368    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:10.282827    4831 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0719 12:01:10.282849    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:10.282859    4831 round_trippers.go:580]     Audit-Id: caf4fac1-a83a-4bb8-be8e-6f22825003d9
	I0719 12:01:10.282865    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:10.282870    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:10.282877    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:10.282883    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:10.282909    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:10 GMT
	I0719 12:01:10.283170    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:10.283427    4831 node_ready.go:49] node "multinode-871000" has status "Ready":"True"
	I0719 12:01:10.283445    4831 node_ready.go:38] duration metric: took 10.577375784s for node "multinode-871000" to be "Ready" ...
	I0719 12:01:10.283454    4831 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 12:01:10.283500    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:01:10.283507    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:10.283515    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:10.283521    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:10.287540    4831 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 12:01:10.287549    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:10.287554    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:10.287558    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:10 GMT
	I0719 12:01:10.287561    4831 round_trippers.go:580]     Audit-Id: d2878a28-80a5-4830-8db4-c96d17edd26d
	I0719 12:01:10.287564    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:10.287567    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:10.287570    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:10.288724    4831 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1022"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 86042 chars]
	I0719 12:01:10.290530    4831 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:10.290574    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:01:10.290579    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:10.290585    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:10.290589    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:10.292152    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:10.292163    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:10.292170    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:10 GMT
	I0719 12:01:10.292175    4831 round_trippers.go:580]     Audit-Id: c56623d3-29a1-45a1-886e-015c36c704fc
	I0719 12:01:10.292180    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:10.292184    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:10.292188    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:10.292214    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:10.292309    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0719 12:01:10.292534    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:10.292541    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:10.292546    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:10.292550    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:10.297542    4831 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 12:01:10.297551    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:10.297555    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:10.297558    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:10.297562    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:10 GMT
	I0719 12:01:10.297564    4831 round_trippers.go:580]     Audit-Id: cb0f4ae4-8d95-449d-a991-640c29f4a119
	I0719 12:01:10.297582    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:10.297589    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:10.297816    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:10.790876    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:01:10.790896    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:10.790908    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:10.790914    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:10.795665    4831 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 12:01:10.795675    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:10.795680    4831 round_trippers.go:580]     Audit-Id: 0f6d05cb-37d4-4bf4-ace8-24384db5dcdd
	I0719 12:01:10.795683    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:10.795686    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:10.795689    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:10.795692    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:10.795695    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:10 GMT
	I0719 12:01:10.796103    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0719 12:01:10.796386    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:10.796393    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:10.796399    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:10.796404    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:10.798239    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:10.798251    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:10.798257    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:10 GMT
	I0719 12:01:10.798260    4831 round_trippers.go:580]     Audit-Id: ada4a7cd-a686-4a06-b343-92e36440b9bb
	I0719 12:01:10.798263    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:10.798266    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:10.798268    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:10.798271    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:10.798361    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:11.290965    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:01:11.290985    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:11.290997    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:11.291005    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:11.293891    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:11.293905    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:11.293912    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:11.293916    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:11.293922    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:11.293926    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:11.293930    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:11 GMT
	I0719 12:01:11.293934    4831 round_trippers.go:580]     Audit-Id: db7a77a9-a64e-464d-a9bc-c5bd2e6ba8ef
	I0719 12:01:11.294075    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0719 12:01:11.294457    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:11.294467    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:11.294475    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:11.294480    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:11.295889    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:11.295897    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:11.295904    4831 round_trippers.go:580]     Audit-Id: 650d7f12-0ae4-42e4-9a1b-faca3f71edb1
	I0719 12:01:11.295909    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:11.295913    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:11.295916    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:11.295922    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:11.295937    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:11 GMT
	I0719 12:01:11.296121    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:11.791088    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:01:11.791116    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:11.791128    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:11.791134    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:11.794310    4831 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 12:01:11.794325    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:11.794332    4831 round_trippers.go:580]     Audit-Id: 10429346-c51d-4881-9730-d05f7fad3d89
	I0719 12:01:11.794338    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:11.794342    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:11.794347    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:11.794351    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:11.794355    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:11 GMT
	I0719 12:01:11.794462    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0719 12:01:11.794822    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:11.794831    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:11.794839    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:11.794843    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:11.796226    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:11.796237    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:11.796244    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:11.796269    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:11.796283    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:11.796291    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:11.796296    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:11 GMT
	I0719 12:01:11.796301    4831 round_trippers.go:580]     Audit-Id: 391906f7-97f9-4d14-af6d-6a2218ad5f6e
	I0719 12:01:11.796511    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:12.291216    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:01:12.291230    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.291236    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.291239    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.292860    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.292871    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.292879    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.292885    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:12 GMT
	I0719 12:01:12.292890    4831 round_trippers.go:580]     Audit-Id: 0cf802fe-2359-4e52-8dd2-e0cedb5bd98d
	I0719 12:01:12.292897    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.292901    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.292904    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.293071    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0719 12:01:12.293358    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:12.293365    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.293371    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.293374    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.294500    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.294511    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.294518    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.294523    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.294527    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:12 GMT
	I0719 12:01:12.294551    4831 round_trippers.go:580]     Audit-Id: 9e735d3b-437d-43f5-8d0a-c6ee0f179e73
	I0719 12:01:12.294564    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.294567    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.294747    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:12.294928    4831 pod_ready.go:102] pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace has status "Ready":"False"
	I0719 12:01:12.790762    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:01:12.790786    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.790871    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.790879    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.793624    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:12.793636    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.793644    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.793649    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.793654    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.793660    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:12 GMT
	I0719 12:01:12.793664    4831 round_trippers.go:580]     Audit-Id: 0045a688-0708-4e3c-be61-8812c76c6f1d
	I0719 12:01:12.793668    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.794110    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"1037","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0719 12:01:12.794469    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:12.794479    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.794487    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.794494    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.795667    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.795678    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.795685    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.795688    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.795691    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.795697    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.795700    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:12 GMT
	I0719 12:01:12.795703    4831 round_trippers.go:580]     Audit-Id: b100dbbf-f10d-44a9-ad86-a6a01c66e107
	I0719 12:01:12.795998    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:12.796168    4831 pod_ready.go:92] pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:12.796177    4831 pod_ready.go:81] duration metric: took 2.505644651s for pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.796184    4831 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.796215    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-871000
	I0719 12:01:12.796220    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.796225    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.796228    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.797357    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.797363    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.797369    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.797375    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:12 GMT
	I0719 12:01:12.797379    4831 round_trippers.go:580]     Audit-Id: 387b09b2-27b5-487f-b615-79b935091495
	I0719 12:01:12.797383    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.797386    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.797390    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.797504    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-871000","namespace":"kube-system","uid":"8818ed52-4b2d-4629-af02-b835e3cfa034","resourceVersion":"1020","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.mirror":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.seen":"2024-07-19T18:55:05.740545259Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0719 12:01:12.797750    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:12.797756    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.797761    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.797765    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.798727    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:12.798735    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.798738    4831 round_trippers.go:580]     Audit-Id: b297bcdc-4feb-4b3f-bb2c-d6130a7fa690
	I0719 12:01:12.798744    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.798749    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.798754    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.798757    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.798760    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:12 GMT
	I0719 12:01:12.798893    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:12.799060    4831 pod_ready.go:92] pod "etcd-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:12.799068    4831 pod_ready.go:81] duration metric: took 2.880399ms for pod "etcd-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.799079    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.799110    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-871000
	I0719 12:01:12.799115    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.799121    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.799125    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.800133    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.800141    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.800146    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:12 GMT
	I0719 12:01:12.800151    4831 round_trippers.go:580]     Audit-Id: b5716c0f-c9c6-4af8-b292-df106b436d3f
	I0719 12:01:12.800154    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.800156    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.800159    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.800162    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.800362    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-871000","namespace":"kube-system","uid":"9f3fdf92-3cbd-411c-802e-cbbbe1b60d68","resourceVersion":"993","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"1acc565de321609aa117f6402dfd5ae5","kubernetes.io/config.mirror":"1acc565de321609aa117f6402dfd5ae5","kubernetes.io/config.seen":"2024-07-19T18:55:05.740548209Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0719 12:01:12.800583    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:12.800590    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.800596    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.800599    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.801792    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.801800    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.801806    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.801810    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.801815    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:12.801821    4831 round_trippers.go:580]     Audit-Id: e5d5104d-a8cb-4f54-ad9e-478edf166f20
	I0719 12:01:12.801824    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.801827    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.802112    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:12.802278    4831 pod_ready.go:92] pod "kube-apiserver-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:12.802286    4831 pod_ready.go:81] duration metric: took 3.202194ms for pod "kube-apiserver-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.802292    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.802323    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-871000
	I0719 12:01:12.802328    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.802333    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.802338    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.803424    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.803433    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.803440    4831 round_trippers.go:580]     Audit-Id: cf39fe9d-f2f6-4e8c-9b66-91c57ad62fd7
	I0719 12:01:12.803447    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.803452    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.803457    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.803461    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.803463    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:12.803593    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-871000","namespace":"kube-system","uid":"74e143fb-26b8-4d1d-b07a-f1b2c590133f","resourceVersion":"1003","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8f11f40ce051787c8d8ced4f83327f27","kubernetes.io/config.mirror":"8f11f40ce051787c8d8ced4f83327f27","kubernetes.io/config.seen":"2024-07-19T18:55:05.740548943Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0719 12:01:12.803813    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:12.803821    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.803827    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.803831    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.804772    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:12.804778    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.804783    4831 round_trippers.go:580]     Audit-Id: b38cc3fb-2b02-4389-983c-73b9cdbaf280
	I0719 12:01:12.804786    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.804789    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.804792    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.804794    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.804797    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:12.804917    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:12.805076    4831 pod_ready.go:92] pod "kube-controller-manager-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:12.805084    4831 pod_ready.go:81] duration metric: took 2.786528ms for pod "kube-controller-manager-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.805091    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-86ssb" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.805129    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-86ssb
	I0719 12:01:12.805134    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.805140    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.805144    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.806159    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.806165    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.806170    4831 round_trippers.go:580]     Audit-Id: b9f9807c-33dc-45f2-a9ee-5b2429b13d2f
	I0719 12:01:12.806173    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.806175    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.806177    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.806195    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.806199    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:12.806337    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-86ssb","generateName":"kube-proxy-","namespace":"kube-system","uid":"37609942-98d8-4c6b-b339-53bf3a901e3f","resourceVersion":"862","creationTimestamp":"2024-07-19T18:57:03Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0719 12:01:12.806563    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m03
	I0719 12:01:12.806570    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.806575    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.806577    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.807514    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:12.807521    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.807526    4831 round_trippers.go:580]     Audit-Id: c12be27b-efb2-4b62-b73b-fede6b2d8f0d
	I0719 12:01:12.807529    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.807532    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.807534    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.807538    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.807541    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:12.807657    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m03","uid":"4745805a-e01a-4411-b942-abcd092662c6","resourceVersion":"889","creationTimestamp":"2024-07-19T18:59:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T11_59_53_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:59:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3641 chars]
	I0719 12:01:12.807802    4831 pod_ready.go:92] pod "kube-proxy-86ssb" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:12.807809    4831 pod_ready.go:81] duration metric: took 2.713615ms for pod "kube-proxy-86ssb" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.807816    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-89hm2" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.992186    4831 request.go:629] Waited for 184.332224ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89hm2
	I0719 12:01:12.992306    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89hm2
	I0719 12:01:12.992316    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.992325    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.992331    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.994688    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:12.994702    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.994712    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.994717    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:12.994721    4831 round_trippers.go:580]     Audit-Id: 0bac3d21-7e17-46c8-bc0e-3cd668703a12
	I0719 12:01:12.994725    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.994729    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.994734    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.994885    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-89hm2","generateName":"kube-proxy-","namespace":"kube-system","uid":"77b4b485-53f0-4480-8b62-a1df26f037b8","resourceVersion":"979","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0719 12:01:13.191224    4831 request.go:629] Waited for 195.982643ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:13.191334    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:13.191346    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:13.191357    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:13.191367    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:13.194050    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:13.194067    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:13.194075    4831 round_trippers.go:580]     Audit-Id: b8c02717-0677-4f98-b81c-a32c519ebf7f
	I0719 12:01:13.194079    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:13.194082    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:13.194086    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:13.194090    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:13.194093    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:13.194461    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:13.194710    4831 pod_ready.go:92] pod "kube-proxy-89hm2" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:13.194722    4831 pod_ready.go:81] duration metric: took 386.901578ms for pod "kube-proxy-89hm2" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:13.194732    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t9bqq" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:13.391733    4831 request.go:629] Waited for 196.93835ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t9bqq
	I0719 12:01:13.391899    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t9bqq
	I0719 12:01:13.391910    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:13.391920    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:13.391925    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:13.394807    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:13.394822    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:13.394830    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:13.394834    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:13.394839    4831 round_trippers.go:580]     Audit-Id: 3bc89593-7e91-45ac-abd2-9679c98d2d42
	I0719 12:01:13.394842    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:13.394846    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:13.394849    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:13.394917    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t9bqq","generateName":"kube-proxy-","namespace":"kube-system","uid":"5ef191fc-6e2e-486c-b825-76c6e0d95416","resourceVersion":"523","creationTimestamp":"2024-07-19T18:56:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:56:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0719 12:01:13.591577    4831 request.go:629] Waited for 196.343729ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:13.591658    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:13.591664    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:13.591670    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:13.591674    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:13.593173    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:13.593183    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:13.593188    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:13.593191    4831 round_trippers.go:580]     Audit-Id: 522e877a-9212-412f-a1c2-e249824e8f02
	I0719 12:01:13.593194    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:13.593197    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:13.593200    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:13.593203    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:13.593271    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"e0450b58-f42e-4eee-a22b-05f89b4b721d","resourceVersion":"589","creationTimestamp":"2024-07-19T18:56:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T11_56_14_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:56:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0719 12:01:13.593446    4831 pod_ready.go:92] pod "kube-proxy-t9bqq" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:13.593454    4831 pod_ready.go:81] duration metric: took 398.717667ms for pod "kube-proxy-t9bqq" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:13.593461    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:13.791188    4831 request.go:629] Waited for 197.685056ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-871000
	I0719 12:01:13.791332    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-871000
	I0719 12:01:13.791342    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:13.791353    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:13.791360    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:13.794105    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:13.794117    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:13.794124    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:13.794132    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:13.794138    4831 round_trippers.go:580]     Audit-Id: f030cc1f-b164-4ff8-b0ab-d1a4c9277014
	I0719 12:01:13.794161    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:13.794171    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:13.794179    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:13.794371    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-871000","namespace":"kube-system","uid":"0d73182a-0458-470e-ac06-ccde27fa113a","resourceVersion":"1012","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b43ab2caff1f80690c8bfbb88ac08a85","kubernetes.io/config.mirror":"b43ab2caff1f80690c8bfbb88ac08a85","kubernetes.io/config.seen":"2024-07-19T18:55:00.040869314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0719 12:01:13.991410    4831 request.go:629] Waited for 196.656272ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:13.991481    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:13.991492    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:13.991505    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:13.991512    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:13.994043    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:13.994058    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:13.994065    4831 round_trippers.go:580]     Audit-Id: 80eadb07-ea7b-4672-8daa-303e15c367f0
	I0719 12:01:13.994108    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:13.994115    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:13.994119    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:13.994124    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:13.994128    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:14 GMT
	I0719 12:01:13.994199    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0719 12:01:13.994444    4831 pod_ready.go:92] pod "kube-scheduler-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:13.994456    4831 pod_ready.go:81] duration metric: took 400.990327ms for pod "kube-scheduler-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:13.994465    4831 pod_ready.go:38] duration metric: took 3.711014187s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 12:01:13.994480    4831 api_server.go:52] waiting for apiserver process to appear ...
	I0719 12:01:13.994566    4831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 12:01:14.007526    4831 command_runner.go:130] > 1608
	I0719 12:01:14.007547    4831 api_server.go:72] duration metric: took 14.536621271s to wait for apiserver process to appear ...
	I0719 12:01:14.007554    4831 api_server.go:88] waiting for apiserver healthz status ...
	I0719 12:01:14.007564    4831 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0719 12:01:14.011279    4831 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0719 12:01:14.011309    4831 round_trippers.go:463] GET https://192.169.0.16:8443/version
	I0719 12:01:14.011314    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:14.011320    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:14.011325    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:14.011853    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:14.011863    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:14.011869    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:14 GMT
	I0719 12:01:14.011873    4831 round_trippers.go:580]     Audit-Id: f1f852e0-9756-4fad-8aa2-5050cf2e389f
	I0719 12:01:14.011877    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:14.011880    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:14.011883    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:14.011887    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:14.011890    4831 round_trippers.go:580]     Content-Length: 263
	I0719 12:01:14.011899    4831 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0719 12:01:14.011920    4831 api_server.go:141] control plane version: v1.30.3
	I0719 12:01:14.011927    4831 api_server.go:131] duration metric: took 4.369782ms to wait for apiserver health ...
	I0719 12:01:14.011933    4831 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 12:01:14.192306    4831 request.go:629] Waited for 180.314222ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:01:14.192382    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:01:14.192472    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:14.192486    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:14.192494    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:14.196552    4831 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 12:01:14.196568    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:14.196580    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:14.196587    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:14.196593    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:14 GMT
	I0719 12:01:14.196615    4831 round_trippers.go:580]     Audit-Id: 5cc0f340-9362-4237-9188-a424d0f8a1de
	I0719 12:01:14.196630    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:14.196640    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:14.197378    4831 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1045"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"1037","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85990 chars]
	I0719 12:01:14.199216    4831 system_pods.go:59] 12 kube-system pods found
	I0719 12:01:14.199226    4831 system_pods.go:61] "coredns-7db6d8ff4d-85r26" [c7d62ec5-693b-46ab-9437-86aef8b469e8] Running
	I0719 12:01:14.199233    4831 system_pods.go:61] "etcd-multinode-871000" [8818ed52-4b2d-4629-af02-b835e3cfa034] Running
	I0719 12:01:14.199237    4831 system_pods.go:61] "kindnet-4stbd" [58fb2d63-07bb-4a27-87c5-4e259083f5be] Running
	I0719 12:01:14.199240    4831 system_pods.go:61] "kindnet-897rz" [a3c96d7b-9aa1-49e1-9fa6-8aad9551be4f] Running
	I0719 12:01:14.199243    4831 system_pods.go:61] "kindnet-hht5h" [f1a7b402-0cf3-469c-8124-6b53aa34f4c7] Running
	I0719 12:01:14.199245    4831 system_pods.go:61] "kube-apiserver-multinode-871000" [9f3fdf92-3cbd-411c-802e-cbbbe1b60d68] Running
	I0719 12:01:14.199248    4831 system_pods.go:61] "kube-controller-manager-multinode-871000" [74e143fb-26b8-4d1d-b07a-f1b2c590133f] Running
	I0719 12:01:14.199251    4831 system_pods.go:61] "kube-proxy-86ssb" [37609942-98d8-4c6b-b339-53bf3a901e3f] Running
	I0719 12:01:14.199253    4831 system_pods.go:61] "kube-proxy-89hm2" [77b4b485-53f0-4480-8b62-a1df26f037b8] Running
	I0719 12:01:14.199255    4831 system_pods.go:61] "kube-proxy-t9bqq" [5ef191fc-6e2e-486c-b825-76c6e0d95416] Running
	I0719 12:01:14.199258    4831 system_pods.go:61] "kube-scheduler-multinode-871000" [0d73182a-0458-470e-ac06-ccde27fa113a] Running
	I0719 12:01:14.199261    4831 system_pods.go:61] "storage-provisioner" [ccd0aaec-abf0-4aec-9ebf-14f619510aeb] Running
	I0719 12:01:14.199265    4831 system_pods.go:74] duration metric: took 187.329082ms to wait for pod list to return data ...
	I0719 12:01:14.199270    4831 default_sa.go:34] waiting for default service account to be created ...
	I0719 12:01:14.391741    4831 request.go:629] Waited for 192.421328ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/default/serviceaccounts
	I0719 12:01:14.391843    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/default/serviceaccounts
	I0719 12:01:14.391859    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:14.391895    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:14.391906    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:14.394636    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:14.394649    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:14.394656    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:14.394682    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:14.394691    4831 round_trippers.go:580]     Content-Length: 262
	I0719 12:01:14.394695    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:14 GMT
	I0719 12:01:14.394700    4831 round_trippers.go:580]     Audit-Id: 6b235191-1b10-4814-9114-175c0be567bc
	I0719 12:01:14.394703    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:14.394707    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:14.394720    4831 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1045"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"ccdcd62c-500a-4785-b87e-b6abf5989afc","resourceVersion":"363","creationTimestamp":"2024-07-19T18:55:20Z"}}]}
	I0719 12:01:14.394861    4831 default_sa.go:45] found service account: "default"
	I0719 12:01:14.394873    4831 default_sa.go:55] duration metric: took 195.598234ms for default service account to be created ...
	I0719 12:01:14.394879    4831 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 12:01:14.590815    4831 request.go:629] Waited for 195.900423ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:01:14.590861    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:01:14.590866    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:14.590872    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:14.590877    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:14.594875    4831 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 12:01:14.594907    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:14.594915    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:14 GMT
	I0719 12:01:14.594918    4831 round_trippers.go:580]     Audit-Id: 03c851a1-add9-4157-91e1-7326271475b6
	I0719 12:01:14.594920    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:14.594923    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:14.594926    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:14.594928    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:14.596077    4831 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1045"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"1037","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85990 chars]
	I0719 12:01:14.597907    4831 system_pods.go:86] 12 kube-system pods found
	I0719 12:01:14.597917    4831 system_pods.go:89] "coredns-7db6d8ff4d-85r26" [c7d62ec5-693b-46ab-9437-86aef8b469e8] Running
	I0719 12:01:14.597922    4831 system_pods.go:89] "etcd-multinode-871000" [8818ed52-4b2d-4629-af02-b835e3cfa034] Running
	I0719 12:01:14.597926    4831 system_pods.go:89] "kindnet-4stbd" [58fb2d63-07bb-4a27-87c5-4e259083f5be] Running
	I0719 12:01:14.597929    4831 system_pods.go:89] "kindnet-897rz" [a3c96d7b-9aa1-49e1-9fa6-8aad9551be4f] Running
	I0719 12:01:14.597933    4831 system_pods.go:89] "kindnet-hht5h" [f1a7b402-0cf3-469c-8124-6b53aa34f4c7] Running
	I0719 12:01:14.597936    4831 system_pods.go:89] "kube-apiserver-multinode-871000" [9f3fdf92-3cbd-411c-802e-cbbbe1b60d68] Running
	I0719 12:01:14.597941    4831 system_pods.go:89] "kube-controller-manager-multinode-871000" [74e143fb-26b8-4d1d-b07a-f1b2c590133f] Running
	I0719 12:01:14.597944    4831 system_pods.go:89] "kube-proxy-86ssb" [37609942-98d8-4c6b-b339-53bf3a901e3f] Running
	I0719 12:01:14.597948    4831 system_pods.go:89] "kube-proxy-89hm2" [77b4b485-53f0-4480-8b62-a1df26f037b8] Running
	I0719 12:01:14.597951    4831 system_pods.go:89] "kube-proxy-t9bqq" [5ef191fc-6e2e-486c-b825-76c6e0d95416] Running
	I0719 12:01:14.597955    4831 system_pods.go:89] "kube-scheduler-multinode-871000" [0d73182a-0458-470e-ac06-ccde27fa113a] Running
	I0719 12:01:14.597958    4831 system_pods.go:89] "storage-provisioner" [ccd0aaec-abf0-4aec-9ebf-14f619510aeb] Running
	I0719 12:01:14.597963    4831 system_pods.go:126] duration metric: took 203.07922ms to wait for k8s-apps to be running ...
	I0719 12:01:14.597968    4831 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 12:01:14.598021    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 12:01:14.610118    4831 system_svc.go:56] duration metric: took 12.144912ms WaitForService to wait for kubelet
	I0719 12:01:14.610131    4831 kubeadm.go:582] duration metric: took 15.139206631s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:01:14.610143    4831 node_conditions.go:102] verifying NodePressure condition ...
	I0719 12:01:14.790946    4831 request.go:629] Waited for 180.72086ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes
	I0719 12:01:14.790995    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0719 12:01:14.791003    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:14.791013    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:14.791021    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:14.793367    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:14.793382    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:14.793389    4831 round_trippers.go:580]     Audit-Id: 46f01e81-0ba1-4bdd-80b4-c4cfb8c76e66
	I0719 12:01:14.793395    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:14.793407    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:14.793412    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:14.793417    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:14.793420    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:14 GMT
	I0719 12:01:14.793548    4831 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1045"},"items":[{"metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14677 chars]
	I0719 12:01:14.793977    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:01:14.793988    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:01:14.794015    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:01:14.794018    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:01:14.794021    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:01:14.794030    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:01:14.794035    4831 node_conditions.go:105] duration metric: took 183.888782ms to run NodePressure ...
	I0719 12:01:14.794043    4831 start.go:241] waiting for startup goroutines ...
	I0719 12:01:14.794048    4831 start.go:246] waiting for cluster config update ...
	I0719 12:01:14.794054    4831 start.go:255] writing updated cluster config ...
	I0719 12:01:14.819775    4831 out.go:177] 
	I0719 12:01:14.839829    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:01:14.839957    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:01:14.862749    4831 out.go:177] * Starting "multinode-871000-m02" worker node in "multinode-871000" cluster
	I0719 12:01:14.904611    4831 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:01:14.904645    4831 cache.go:56] Caching tarball of preloaded images
	I0719 12:01:14.904836    4831 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1053/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 12:01:14.904854    4831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:01:14.904983    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:01:14.905875    4831 start.go:360] acquireMachinesLock for multinode-871000-m02: {Name:mk9f33e92e6d472bd2fb7a1dc1c9d72253ce59c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:01:14.905953    4831 start.go:364] duration metric: took 62.487µs to acquireMachinesLock for "multinode-871000-m02"
	I0719 12:01:14.905971    4831 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:01:14.905977    4831 fix.go:54] fixHost starting: m02
	I0719 12:01:14.906292    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:14.906309    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:14.915286    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53190
	I0719 12:01:14.915632    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:14.916090    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:14.916110    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:14.916371    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:14.916495    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:14.916591    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetState
	I0719 12:01:14.916684    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:14.916776    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | hyperkit pid from json: 4223
	I0719 12:01:14.917680    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | hyperkit pid 4223 missing from process table
	I0719 12:01:14.917704    4831 fix.go:112] recreateIfNeeded on multinode-871000-m02: state=Stopped err=<nil>
	I0719 12:01:14.917720    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	W0719 12:01:14.917799    4831 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:01:14.938957    4831 out.go:177] * Restarting existing hyperkit VM for "multinode-871000-m02" ...
	I0719 12:01:14.980789    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .Start
	I0719 12:01:14.981060    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:14.981135    4831 main.go:141] libmachine: (multinode-871000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/hyperkit.pid
	I0719 12:01:14.982852    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | hyperkit pid 4223 missing from process table
	I0719 12:01:14.982872    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | pid 4223 is in state "Stopped"
	I0719 12:01:14.982892    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/hyperkit.pid...
	I0719 12:01:14.983111    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Using UUID 0156b6d9-fc48-4ae8-8601-a045f8c107f0
	I0719 12:01:15.009357    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Generated MAC 36:3f:5c:47:18:4c
	I0719 12:01:15.009376    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000
	I0719 12:01:15.009509    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0156b6d9-fc48-4ae8-8601-a045f8c107f0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acba0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0719 12:01:15.009548    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0156b6d9-fc48-4ae8-8601-a045f8c107f0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acba0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0719 12:01:15.009584    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "0156b6d9-fc48-4ae8-8601-a045f8c107f0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/multinode-871000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/tty,log=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/bzimage,/Users/j
enkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000"}
	I0719 12:01:15.009629    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 0156b6d9-fc48-4ae8-8601-a045f8c107f0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/multinode-871000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/tty,log=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/bzimage,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000"
	I0719 12:01:15.009640    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0719 12:01:15.010985    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 DEBUG: hyperkit: Pid is 4857
	I0719 12:01:15.011511    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Attempt 0
	I0719 12:01:15.011532    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:15.011608    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | hyperkit pid from json: 4857
	I0719 12:01:15.013370    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Searching for 36:3f:5c:47:18:4c in /var/db/dhcpd_leases ...
	I0719 12:01:15.013439    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0719 12:01:15.013473    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:4c:c6:88:73:ec ID:1,f2:4c:c6:88:73:ec Lease:0x669c0959}
	I0719 12:01:15.013498    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:5e:a3:f5:89:e4:9e ID:1,5e:a3:f5:89:e4:9e Lease:0x669ab7be}
	I0719 12:01:15.013511    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:36:3f:5c:47:18:4c ID:1,36:3f:5c:47:18:4c Lease:0x669c0844}
	I0719 12:01:15.013532    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Found match: 36:3f:5c:47:18:4c
	I0719 12:01:15.013549    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetConfigRaw
	I0719 12:01:15.013567    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | IP: 192.169.0.18
	I0719 12:01:15.014251    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetIP
	I0719 12:01:15.014429    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:01:15.014874    4831 machine.go:94] provisionDockerMachine start ...
	I0719 12:01:15.014884    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:15.014993    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:15.015109    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:15.015233    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:15.015390    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:15.015491    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:15.015629    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:15.015797    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0719 12:01:15.015805    4831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 12:01:15.019046    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0719 12:01:15.027408    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0719 12:01:15.028366    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0719 12:01:15.028394    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0719 12:01:15.028413    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0719 12:01:15.028431    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0719 12:01:15.407850    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0719 12:01:15.407881    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0719 12:01:15.522578    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0719 12:01:15.522610    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0719 12:01:15.522647    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0719 12:01:15.522666    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0719 12:01:15.523454    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0719 12:01:15.523463    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0719 12:01:20.789057    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:20 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0719 12:01:20.789103    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:20 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0719 12:01:20.789119    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:20 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0719 12:01:20.812558    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:20 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0719 12:01:26.077593    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 12:01:26.077620    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetMachineName
	I0719 12:01:26.077758    4831 buildroot.go:166] provisioning hostname "multinode-871000-m02"
	I0719 12:01:26.077770    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetMachineName
	I0719 12:01:26.077854    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:26.077950    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:26.078032    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.078110    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.078209    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:26.078331    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:26.078487    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0719 12:01:26.078495    4831 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-871000-m02 && echo "multinode-871000-m02" | sudo tee /etc/hostname
	I0719 12:01:26.141205    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-871000-m02
	
	I0719 12:01:26.141220    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:26.141353    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:26.141448    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.141540    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.141624    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:26.141773    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:26.141918    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0719 12:01:26.141929    4831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-871000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-871000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-871000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 12:01:26.198278    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 12:01:26.198293    4831 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19307-1053/.minikube CaCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19307-1053/.minikube}
	I0719 12:01:26.198303    4831 buildroot.go:174] setting up certificates
	I0719 12:01:26.198311    4831 provision.go:84] configureAuth start
	I0719 12:01:26.198318    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetMachineName
	I0719 12:01:26.198456    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetIP
	I0719 12:01:26.198558    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:26.198640    4831 provision.go:143] copyHostCerts
	I0719 12:01:26.198668    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem
	I0719 12:01:26.198739    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem, removing ...
	I0719 12:01:26.198745    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem
	I0719 12:01:26.198894    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem (1078 bytes)
	I0719 12:01:26.199110    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem
	I0719 12:01:26.199156    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem, removing ...
	I0719 12:01:26.199161    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem
	I0719 12:01:26.199243    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem (1123 bytes)
	I0719 12:01:26.199401    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem
	I0719 12:01:26.199444    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem, removing ...
	I0719 12:01:26.199448    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem
	I0719 12:01:26.199527    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem (1675 bytes)
	I0719 12:01:26.199723    4831 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem org=jenkins.multinode-871000-m02 san=[127.0.0.1 192.169.0.18 localhost minikube multinode-871000-m02]
	I0719 12:01:26.273916    4831 provision.go:177] copyRemoteCerts
	I0719 12:01:26.274023    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 12:01:26.274064    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:26.274305    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:26.274464    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.274572    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:26.274695    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/id_rsa Username:docker}
	I0719 12:01:26.306988    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 12:01:26.307065    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 12:01:26.326746    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 12:01:26.326814    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0719 12:01:26.346699    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 12:01:26.346769    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 12:01:26.366495    4831 provision.go:87] duration metric: took 168.170131ms to configureAuth
	I0719 12:01:26.366512    4831 buildroot.go:189] setting minikube options for container-runtime
	I0719 12:01:26.366695    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:01:26.366729    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:26.366857    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:26.366952    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:26.367039    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.367107    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.367195    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:26.367303    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:26.367432    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0719 12:01:26.367440    4831 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 12:01:26.418614    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 12:01:26.418627    4831 buildroot.go:70] root file system type: tmpfs
	I0719 12:01:26.418710    4831 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 12:01:26.418723    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:26.418852    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:26.418949    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.419039    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.419125    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:26.419272    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:26.419413    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0719 12:01:26.419458    4831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 12:01:26.480650    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 12:01:26.480669    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:26.480800    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:26.480882    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.480980    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.481075    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:26.481207    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:26.481350    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0719 12:01:26.481362    4831 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 12:01:28.067920    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 12:01:28.067935    4831 machine.go:97] duration metric: took 13.053095328s to provisionDockerMachine
	I0719 12:01:28.067943    4831 start.go:293] postStartSetup for "multinode-871000-m02" (driver="hyperkit")
	I0719 12:01:28.067950    4831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 12:01:28.067960    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:28.068163    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 12:01:28.068176    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:28.068286    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:28.068373    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:28.068471    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:28.068569    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/id_rsa Username:docker}
	I0719 12:01:28.110906    4831 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 12:01:28.115909    4831 command_runner.go:130] > NAME=Buildroot
	I0719 12:01:28.115920    4831 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 12:01:28.115924    4831 command_runner.go:130] > ID=buildroot
	I0719 12:01:28.115928    4831 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 12:01:28.115931    4831 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 12:01:28.115959    4831 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 12:01:28.115967    4831 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1053/.minikube/addons for local assets ...
	I0719 12:01:28.116068    4831 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1053/.minikube/files for local assets ...
	I0719 12:01:28.116252    4831 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> 15922.pem in /etc/ssl/certs
	I0719 12:01:28.116258    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> /etc/ssl/certs/15922.pem
	I0719 12:01:28.116464    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 12:01:28.125931    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem --> /etc/ssl/certs/15922.pem (1708 bytes)
	I0719 12:01:28.152964    4831 start.go:296] duration metric: took 85.012579ms for postStartSetup
	I0719 12:01:28.152986    4831 fix.go:56] duration metric: took 13.247051958s for fixHost
	I0719 12:01:28.153002    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:28.153136    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:28.153266    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:28.153362    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:28.153456    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:28.153586    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:28.153727    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0719 12:01:28.153734    4831 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 12:01:28.206346    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721415688.390002539
	
	I0719 12:01:28.206357    4831 fix.go:216] guest clock: 1721415688.390002539
	I0719 12:01:28.206362    4831 fix.go:229] Guest: 2024-07-19 12:01:28.390002539 -0700 PDT Remote: 2024-07-19 12:01:28.152992 -0700 PDT m=+55.787755802 (delta=237.010539ms)
	I0719 12:01:28.206372    4831 fix.go:200] guest clock delta is within tolerance: 237.010539ms
	I0719 12:01:28.206376    4831 start.go:83] releasing machines lock for "multinode-871000-m02", held for 13.300458195s
	I0719 12:01:28.206393    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:28.206508    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetIP
	I0719 12:01:28.227092    4831 out.go:177] * Found network options:
	I0719 12:01:28.247879    4831 out.go:177]   - NO_PROXY=192.169.0.16
	W0719 12:01:28.270003    4831 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 12:01:28.270061    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:28.270952    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:28.271221    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:28.271323    4831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 12:01:28.271368    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	W0719 12:01:28.271470    4831 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 12:01:28.271569    4831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 12:01:28.271581    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:28.271591    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:28.271788    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:28.271826    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:28.272010    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:28.272025    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:28.272179    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/id_rsa Username:docker}
	I0719 12:01:28.272206    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:28.272354    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/id_rsa Username:docker}
	I0719 12:01:28.300976    4831 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 12:01:28.301000    4831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 12:01:28.301059    4831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 12:01:28.350887    4831 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0719 12:01:28.351698    4831 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0719 12:01:28.351720    4831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 12:01:28.351729    4831 start.go:495] detecting cgroup driver to use...
	I0719 12:01:28.351804    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 12:01:28.366743    4831 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 12:01:28.367005    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 12:01:28.375738    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 12:01:28.384457    4831 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 12:01:28.384505    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 12:01:28.393286    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 12:01:28.401942    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 12:01:28.410897    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 12:01:28.419464    4831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 12:01:28.428431    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 12:01:28.437254    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 12:01:28.445904    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 12:01:28.454819    4831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 12:01:28.462772    4831 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 12:01:28.462879    4831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 12:01:28.471061    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:01:28.570246    4831 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 12:01:28.587401    4831 start.go:495] detecting cgroup driver to use...
	I0719 12:01:28.587479    4831 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 12:01:28.607311    4831 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 12:01:28.607781    4831 command_runner.go:130] > [Unit]
	I0719 12:01:28.607796    4831 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 12:01:28.607804    4831 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 12:01:28.607810    4831 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 12:01:28.607814    4831 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 12:01:28.607818    4831 command_runner.go:130] > StartLimitBurst=3
	I0719 12:01:28.607822    4831 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 12:01:28.607825    4831 command_runner.go:130] > [Service]
	I0719 12:01:28.607830    4831 command_runner.go:130] > Type=notify
	I0719 12:01:28.607833    4831 command_runner.go:130] > Restart=on-failure
	I0719 12:01:28.607837    4831 command_runner.go:130] > Environment=NO_PROXY=192.169.0.16
	I0719 12:01:28.607843    4831 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 12:01:28.607854    4831 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 12:01:28.607861    4831 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 12:01:28.607866    4831 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 12:01:28.607872    4831 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 12:01:28.607887    4831 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 12:01:28.607899    4831 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 12:01:28.607912    4831 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 12:01:28.607918    4831 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 12:01:28.607922    4831 command_runner.go:130] > ExecStart=
	I0719 12:01:28.607940    4831 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0719 12:01:28.607945    4831 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 12:01:28.607952    4831 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 12:01:28.607958    4831 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 12:01:28.607961    4831 command_runner.go:130] > LimitNOFILE=infinity
	I0719 12:01:28.607967    4831 command_runner.go:130] > LimitNPROC=infinity
	I0719 12:01:28.607973    4831 command_runner.go:130] > LimitCORE=infinity
	I0719 12:01:28.608000    4831 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 12:01:28.608006    4831 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 12:01:28.608009    4831 command_runner.go:130] > TasksMax=infinity
	I0719 12:01:28.608013    4831 command_runner.go:130] > TimeoutStartSec=0
	I0719 12:01:28.608018    4831 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 12:01:28.608024    4831 command_runner.go:130] > Delegate=yes
	I0719 12:01:28.608029    4831 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 12:01:28.608037    4831 command_runner.go:130] > KillMode=process
	I0719 12:01:28.608041    4831 command_runner.go:130] > [Install]
	I0719 12:01:28.608045    4831 command_runner.go:130] > WantedBy=multi-user.target
	I0719 12:01:28.608155    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 12:01:28.620507    4831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 12:01:28.641361    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 12:01:28.652511    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 12:01:28.663646    4831 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 12:01:28.685363    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 12:01:28.696159    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 12:01:28.711054    4831 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 12:01:28.711285    4831 ssh_runner.go:195] Run: which cri-dockerd
	I0719 12:01:28.714140    4831 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 12:01:28.714324    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 12:01:28.721655    4831 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 12:01:28.734998    4831 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 12:01:28.834910    4831 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 12:01:28.951493    4831 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 12:01:28.951517    4831 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 12:01:28.966896    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:01:29.068681    4831 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 12:01:31.353955    4831 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.285262248s)
	I0719 12:01:31.354021    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 12:01:31.365222    4831 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0719 12:01:31.379213    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 12:01:31.390372    4831 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 12:01:31.491779    4831 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 12:01:31.583591    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:01:31.682747    4831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 12:01:31.696512    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 12:01:31.708413    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:01:31.803185    4831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 12:01:31.860134    4831 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 12:01:31.860208    4831 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 12:01:31.864363    4831 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0719 12:01:31.864377    4831 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 12:01:31.864382    4831 command_runner.go:130] > Device: 0,22	Inode: 770         Links: 1
	I0719 12:01:31.864387    4831 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0719 12:01:31.864391    4831 command_runner.go:130] > Access: 2024-07-19 19:01:32.000301231 +0000
	I0719 12:01:31.864402    4831 command_runner.go:130] > Modify: 2024-07-19 19:01:32.000301231 +0000
	I0719 12:01:31.864407    4831 command_runner.go:130] > Change: 2024-07-19 19:01:32.002301069 +0000
	I0719 12:01:31.864410    4831 command_runner.go:130] >  Birth: -
	I0719 12:01:31.864580    4831 start.go:563] Will wait 60s for crictl version
	I0719 12:01:31.864627    4831 ssh_runner.go:195] Run: which crictl
	I0719 12:01:31.867575    4831 command_runner.go:130] > /usr/bin/crictl
	I0719 12:01:31.867685    4831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 12:01:31.895735    4831 command_runner.go:130] > Version:  0.1.0
	I0719 12:01:31.895747    4831 command_runner.go:130] > RuntimeName:  docker
	I0719 12:01:31.895838    4831 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0719 12:01:31.895891    4831 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 12:01:31.897011    4831 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 12:01:31.897077    4831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 12:01:31.913433    4831 command_runner.go:130] > 27.0.3
	I0719 12:01:31.914376    4831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 12:01:31.930760    4831 command_runner.go:130] > 27.0.3
	I0719 12:01:31.952942    4831 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 12:01:31.974894    4831 out.go:177]   - env NO_PROXY=192.169.0.16
	I0719 12:01:31.995734    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetIP
	I0719 12:01:31.996154    4831 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0719 12:01:32.000442    4831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 12:01:32.010582    4831 mustload.go:65] Loading cluster: multinode-871000
	I0719 12:01:32.010758    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:01:32.010982    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:32.010997    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:32.019725    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53211
	I0719 12:01:32.020067    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:32.020413    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:32.020429    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:32.020621    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:32.020736    4831 main.go:141] libmachine: (multinode-871000) Calling .GetState
	I0719 12:01:32.020818    4831 main.go:141] libmachine: (multinode-871000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:32.020914    4831 main.go:141] libmachine: (multinode-871000) DBG | hyperkit pid from json: 4843
	I0719 12:01:32.021858    4831 host.go:66] Checking if "multinode-871000" exists ...
	I0719 12:01:32.022124    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:32.022145    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:32.030713    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53213
	I0719 12:01:32.031056    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:32.031393    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:32.031404    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:32.031586    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:32.031700    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:01:32.031802    4831 certs.go:68] Setting up /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000 for IP: 192.169.0.18
	I0719 12:01:32.031808    4831 certs.go:194] generating shared ca certs ...
	I0719 12:01:32.031820    4831 certs.go:226] acquiring lock for ca certs: {Name:mk78732514e475c67b8a22bdfb9da389d614aef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:01:32.031981    4831 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.key
	I0719 12:01:32.032057    4831 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.key
	I0719 12:01:32.032067    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 12:01:32.032088    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 12:01:32.032107    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 12:01:32.032125    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 12:01:32.032218    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592.pem (1338 bytes)
	W0719 12:01:32.032269    4831 certs.go:480] ignoring /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592_empty.pem, impossibly tiny 0 bytes
	I0719 12:01:32.032280    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 12:01:32.032314    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem (1078 bytes)
	I0719 12:01:32.032349    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem (1123 bytes)
	I0719 12:01:32.032378    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem (1675 bytes)
	I0719 12:01:32.032472    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem (1708 bytes)
	I0719 12:01:32.032507    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592.pem -> /usr/share/ca-certificates/1592.pem
	I0719 12:01:32.032528    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> /usr/share/ca-certificates/15922.pem
	I0719 12:01:32.032551    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:01:32.032576    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 12:01:32.052233    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 12:01:32.071854    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 12:01:32.091255    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 12:01:32.110572    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592.pem --> /usr/share/ca-certificates/1592.pem (1338 bytes)
	I0719 12:01:32.129684    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem --> /usr/share/ca-certificates/15922.pem (1708 bytes)
	I0719 12:01:32.148789    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 12:01:32.167899    4831 ssh_runner.go:195] Run: openssl version
	I0719 12:01:32.171959    4831 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0719 12:01:32.172092    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1592.pem && ln -fs /usr/share/ca-certificates/1592.pem /etc/ssl/certs/1592.pem"
	I0719 12:01:32.181030    4831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1592.pem
	I0719 12:01:32.184302    4831 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 18:22 /usr/share/ca-certificates/1592.pem
	I0719 12:01:32.184427    4831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:22 /usr/share/ca-certificates/1592.pem
	I0719 12:01:32.184466    4831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1592.pem
	I0719 12:01:32.188487    4831 command_runner.go:130] > 51391683
	I0719 12:01:32.188670    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1592.pem /etc/ssl/certs/51391683.0"
	I0719 12:01:32.197628    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15922.pem && ln -fs /usr/share/ca-certificates/15922.pem /etc/ssl/certs/15922.pem"
	I0719 12:01:32.206800    4831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15922.pem
	I0719 12:01:32.210082    4831 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 18:22 /usr/share/ca-certificates/15922.pem
	I0719 12:01:32.210167    4831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:22 /usr/share/ca-certificates/15922.pem
	I0719 12:01:32.210220    4831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15922.pem
	I0719 12:01:32.214519    4831 command_runner.go:130] > 3ec20f2e
	I0719 12:01:32.214724    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15922.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 12:01:32.224361    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 12:01:32.233959    4831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:01:32.237294    4831 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 18:14 /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:01:32.237400    4831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:14 /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:01:32.237438    4831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:01:32.241505    4831 command_runner.go:130] > b5213941
	I0719 12:01:32.241702    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 12:01:32.250746    4831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 12:01:32.253757    4831 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 12:01:32.253842    4831 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 12:01:32.253876    4831 kubeadm.go:934] updating node {m02 192.169.0.18 8443 v1.30.3 docker false true} ...
	I0719 12:01:32.253936    4831 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-871000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 12:01:32.253975    4831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 12:01:32.261806    4831 command_runner.go:130] > kubeadm
	I0719 12:01:32.261814    4831 command_runner.go:130] > kubectl
	I0719 12:01:32.261817    4831 command_runner.go:130] > kubelet
	I0719 12:01:32.261923    4831 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 12:01:32.261965    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0719 12:01:32.270047    4831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0719 12:01:32.283477    4831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 12:01:32.297060    4831 ssh_runner.go:195] Run: grep 192.169.0.16	control-plane.minikube.internal$ /etc/hosts
	I0719 12:01:32.299937    4831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 12:01:32.309813    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:01:32.409600    4831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 12:01:32.424238    4831 host.go:66] Checking if "multinode-871000" exists ...
	I0719 12:01:32.424546    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:32.424566    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:32.433492    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53215
	I0719 12:01:32.433844    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:32.434194    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:32.434205    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:32.434437    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:32.434560    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:01:32.434657    4831 start.go:317] joinCluster: &{Name:multinode-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.19 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:01:32.434731    4831 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0719 12:01:32.434770    4831 host.go:66] Checking if "multinode-871000-m02" exists ...
	I0719 12:01:32.435045    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:32.435069    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:32.444130    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53217
	I0719 12:01:32.444475    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:32.444802    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:32.444813    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:32.445037    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:32.445149    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:32.445235    4831 mustload.go:65] Loading cluster: multinode-871000
	I0719 12:01:32.445434    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:01:32.445651    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:32.445669    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:32.454481    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53219
	I0719 12:01:32.454849    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:32.455206    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:32.455224    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:32.455431    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:32.455536    4831 main.go:141] libmachine: (multinode-871000) Calling .GetState
	I0719 12:01:32.455620    4831 main.go:141] libmachine: (multinode-871000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:32.455696    4831 main.go:141] libmachine: (multinode-871000) DBG | hyperkit pid from json: 4843
	I0719 12:01:32.456654    4831 host.go:66] Checking if "multinode-871000" exists ...
	I0719 12:01:32.456914    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:32.456938    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:32.465782    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53221
	I0719 12:01:32.466117    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:32.466453    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:32.466470    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:32.466689    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:32.466808    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:01:32.466890    4831 api_server.go:166] Checking apiserver status ...
	I0719 12:01:32.466943    4831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 12:01:32.466953    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:01:32.467031    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:01:32.467122    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:01:32.467211    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:01:32.467290    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/id_rsa Username:docker}
	I0719 12:01:32.508424    4831 command_runner.go:130] > 1608
	I0719 12:01:32.508522    4831 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1608/cgroup
	W0719 12:01:32.516261    4831 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1608/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 12:01:32.516331    4831 ssh_runner.go:195] Run: ls
	I0719 12:01:32.520049    4831 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0719 12:01:32.523301    4831 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0719 12:01:32.523370    4831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-871000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0719 12:01:32.605153    4831 command_runner.go:130] > node/multinode-871000-m02 cordoned
	I0719 12:01:35.622846    4831 command_runner.go:130] > pod "busybox-fc5497c4f-t7lpn" has DeletionTimestamp older than 1 seconds, skipping
	I0719 12:01:35.622859    4831 command_runner.go:130] > node/multinode-871000-m02 drained
	I0719 12:01:35.624380    4831 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-897rz, kube-system/kube-proxy-t9bqq
	I0719 12:01:35.624481    4831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-871000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.101108094s)
	I0719 12:01:35.624490    4831 node.go:128] successfully drained node "multinode-871000-m02"
	I0719 12:01:35.624512    4831 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0719 12:01:35.624530    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:35.624668    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:35.624765    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:35.624854    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:35.624941    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/id_rsa Username:docker}
	I0719 12:01:35.708033    4831 command_runner.go:130] > [preflight] Running pre-flight checks
	I0719 12:01:35.708391    4831 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0719 12:01:35.708453    4831 command_runner.go:130] > [reset] Stopping the kubelet service
	I0719 12:01:35.714749    4831 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0719 12:01:35.927378    4831 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0719 12:01:35.928149    4831 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0719 12:01:35.928161    4831 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0719 12:01:35.928170    4831 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0719 12:01:35.928176    4831 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0719 12:01:35.928181    4831 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0719 12:01:35.928186    4831 command_runner.go:130] > to reset your system's IPVS tables.
	I0719 12:01:35.928192    4831 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0719 12:01:35.928205    4831 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0719 12:01:35.929009    4831 command_runner.go:130] ! W0719 19:01:35.897039    1350 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0719 12:01:35.929046    4831 command_runner.go:130] ! W0719 19:01:36.115281    1350 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod fa390727edc254d6b9d466a058e2931134bb55963090ecee2afc18bba72c7d10: output: E0719 19:01:36.015602    1379 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-t7lpn_default\" network: cni config uninitialized" podSandboxID="fa390727edc254d6b9d466a058e2931134bb55963090ecee2afc18bba72c7d10"
	I0719 12:01:35.929059    4831 command_runner.go:130] ! time="2024-07-19T19:01:36Z" level=fatal msg="stopping the pod sandbox \"fa390727edc254d6b9d466a058e2931134bb55963090ecee2afc18bba72c7d10\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-t7lpn_default\" network: cni config uninitialized"
	I0719 12:01:35.929063    4831 command_runner.go:130] ! : exit status 1
	I0719 12:01:35.929075    4831 node.go:155] successfully reset node "multinode-871000-m02"
	I0719 12:01:35.929331    4831 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 12:01:35.929542    4831 kapi.go:59] client config for multinode-871000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/client.key", CAFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xebf8ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 12:01:35.929798    4831 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0719 12:01:35.929828    4831 round_trippers.go:463] DELETE https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:35.929832    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:35.929841    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:35.929845    4831 round_trippers.go:473]     Content-Type: application/json
	I0719 12:01:35.929848    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:35.932548    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:35.932559    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:35.932564    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:35.932567    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:35.932570    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:35.932572    4831 round_trippers.go:580]     Content-Length: 171
	I0719 12:01:35.932577    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:36 GMT
	I0719 12:01:35.932580    4831 round_trippers.go:580]     Audit-Id: 39ce5c03-5c86-4cfe-8e92-cfccfd4d77aa
	I0719 12:01:35.932583    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:35.932593    4831 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-871000-m02","kind":"nodes","uid":"e0450b58-f42e-4eee-a22b-05f89b4b721d"}}
	I0719 12:01:35.932611    4831 node.go:180] successfully deleted node "multinode-871000-m02"
	I0719 12:01:35.932621    4831 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0719 12:01:35.932640    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 12:01:35.932656    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:01:35.932799    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:01:35.932885    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:01:35.932970    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:01:35.933043    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/id_rsa Username:docker}
	I0719 12:01:36.016218    4831 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token pth09b.v0t542n3s0kf9m1k --discovery-token-ca-cert-hash sha256:afa13eeacf66fe5a050050bebf5083e6d92babcb46083a82ef00c5e81d9e788a 
	I0719 12:01:36.017205    4831 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0719 12:01:36.017227    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pth09b.v0t542n3s0kf9m1k --discovery-token-ca-cert-hash sha256:afa13eeacf66fe5a050050bebf5083e6d92babcb46083a82ef00c5e81d9e788a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-871000-m02"
	I0719 12:01:36.050536    4831 command_runner.go:130] > [preflight] Running pre-flight checks
	I0719 12:01:36.156351    4831 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0719 12:01:36.156369    4831 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0719 12:01:36.186848    4831 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 12:01:36.186863    4831 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 12:01:36.186882    4831 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0719 12:01:36.287480    4831 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 12:01:36.789623    4831 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.485895ms
	I0719 12:01:36.789640    4831 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0719 12:01:37.300697    4831 command_runner.go:130] > This node has joined the cluster:
	I0719 12:01:37.300712    4831 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0719 12:01:37.300718    4831 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0719 12:01:37.300723    4831 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0719 12:01:37.302165    4831 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 12:01:37.302235    4831 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pth09b.v0t542n3s0kf9m1k --discovery-token-ca-cert-hash sha256:afa13eeacf66fe5a050050bebf5083e6d92babcb46083a82ef00c5e81d9e788a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-871000-m02": (1.284993769s)
	I0719 12:01:37.302253    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 12:01:37.409883    4831 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0719 12:01:37.515502    4831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-871000-m02 minikube.k8s.io/updated_at=2024_07_19T12_01_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=multinode-871000 minikube.k8s.io/primary=false
	I0719 12:01:37.585139    4831 command_runner.go:130] > node/multinode-871000-m02 labeled
	I0719 12:01:37.586417    4831 start.go:319] duration metric: took 5.151775223s to joinCluster
	I0719 12:01:37.586467    4831 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0719 12:01:37.586660    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:01:37.606739    4831 out.go:177] * Verifying Kubernetes components...
	I0719 12:01:37.649725    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:01:37.751645    4831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 12:01:37.764431    4831 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 12:01:37.764629    4831 kapi.go:59] client config for multinode-871000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/client.key", CAFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xebf8ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 12:01:37.764804    4831 node_ready.go:35] waiting up to 6m0s for node "multinode-871000-m02" to be "Ready" ...
	I0719 12:01:37.764850    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:37.764855    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:37.764861    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:37.764865    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:37.766434    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:37.766446    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:37.766466    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:37.766478    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:37.766491    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:37.766499    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:37 GMT
	I0719 12:01:37.766510    4831 round_trippers.go:580]     Audit-Id: f7abf541-61fe-49a3-a985-9891f0494517
	I0719 12:01:37.766518    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:37.766721    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1087","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3564 chars]
	I0719 12:01:38.265874    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:38.265887    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:38.265893    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:38.265898    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:38.267485    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:38.267495    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:38.267500    4831 round_trippers.go:580]     Audit-Id: bc869004-cebf-4d35-8b5c-9ed6e4ee6eef
	I0719 12:01:38.267505    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:38.267508    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:38.267511    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:38.267513    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:38.267516    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:38 GMT
	I0719 12:01:38.267630    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:38.764931    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:38.764954    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:38.764961    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:38.764963    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:38.767327    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:38.767340    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:38.767345    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:38.767350    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:38.767362    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:38 GMT
	I0719 12:01:38.767369    4831 round_trippers.go:580]     Audit-Id: c26a5767-263b-444c-997e-0a00c04807d5
	I0719 12:01:38.767374    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:38.767379    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:38.767542    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:39.265019    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:39.265032    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:39.265038    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:39.265042    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:39.266727    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:39.266739    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:39.266747    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:39.266752    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:39 GMT
	I0719 12:01:39.266756    4831 round_trippers.go:580]     Audit-Id: 5fb15f0f-982e-4507-9642-d8d2d9abaeb8
	I0719 12:01:39.266762    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:39.266764    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:39.266767    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:39.266905    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:39.765285    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:39.765306    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:39.765318    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:39.765324    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:39.768076    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:39.768092    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:39.768114    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:39.768121    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:39.768125    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:39.768130    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:39.768135    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:39 GMT
	I0719 12:01:39.768139    4831 round_trippers.go:580]     Audit-Id: 2114d0e2-a351-41f4-bf52-988a5b256300
	I0719 12:01:39.768510    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:39.768756    4831 node_ready.go:53] node "multinode-871000-m02" has status "Ready":"False"
	I0719 12:01:40.265989    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:40.266008    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:40.266020    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:40.266025    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:40.268377    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:40.268398    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:40.268436    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:40.268461    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:40.268472    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:40.268478    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:40.268484    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:40 GMT
	I0719 12:01:40.268494    4831 round_trippers.go:580]     Audit-Id: d3ac57b3-9a60-4301-9fc5-fdce427f1686
	I0719 12:01:40.268599    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:40.765782    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:40.765806    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:40.765819    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:40.765824    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:40.768594    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:40.768613    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:40.768647    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:40.768681    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:40.768687    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:40.768691    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:40.768695    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:40 GMT
	I0719 12:01:40.768699    4831 round_trippers.go:580]     Audit-Id: 45ef635c-47c3-4ee4-b5a1-76eaff193f8b
	I0719 12:01:40.768936    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:41.266054    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:41.266075    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:41.266084    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:41.266093    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:41.268415    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:41.268431    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:41.268439    4831 round_trippers.go:580]     Audit-Id: b7059ccb-980e-49b9-a10a-3d5dccaceb5c
	I0719 12:01:41.268444    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:41.268449    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:41.268464    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:41.268468    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:41.268471    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:41 GMT
	I0719 12:01:41.268634    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:41.766644    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:41.766666    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:41.766675    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:41.766680    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:41.768834    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:41.768848    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:41.768853    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:41.768857    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:41 GMT
	I0719 12:01:41.768860    4831 round_trippers.go:580]     Audit-Id: e0e224f5-ec58-4ecf-af00-5adecd99eda3
	I0719 12:01:41.768863    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:41.768869    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:41.768873    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:41.768983    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:41.769156    4831 node_ready.go:53] node "multinode-871000-m02" has status "Ready":"False"
	I0719 12:01:42.266666    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:42.266689    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:42.266696    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:42.266699    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:42.268252    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:42.268264    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:42.268271    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:42.268276    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:42.268279    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:42.268283    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:42 GMT
	I0719 12:01:42.268285    4831 round_trippers.go:580]     Audit-Id: 340c137c-289d-4c8a-b5c7-ae0b763fc314
	I0719 12:01:42.268289    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:42.268368    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:42.765414    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:42.765430    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:42.765438    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:42.765442    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:42.767314    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:42.767325    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:42.767330    4831 round_trippers.go:580]     Audit-Id: 8580e686-730c-47e6-af94-c9f348ef24fc
	I0719 12:01:42.767333    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:42.767337    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:42.767340    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:42.767343    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:42.767345    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:42 GMT
	I0719 12:01:42.767464    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:43.265515    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:43.265530    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:43.265538    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:43.265542    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:43.267130    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:43.267160    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:43.267166    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:43.267172    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:43 GMT
	I0719 12:01:43.267174    4831 round_trippers.go:580]     Audit-Id: 6da3f2a4-a472-4418-9977-afe5cdc1923c
	I0719 12:01:43.267177    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:43.267185    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:43.267187    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:43.267294    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:43.765367    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:43.765393    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:43.765406    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:43.765412    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:43.768182    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:43.768197    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:43.768205    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:43 GMT
	I0719 12:01:43.768209    4831 round_trippers.go:580]     Audit-Id: 24ce980a-3121-45b3-a46a-050b0025e527
	I0719 12:01:43.768213    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:43.768218    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:43.768221    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:43.768224    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:43.768285    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:44.265772    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:44.265803    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:44.265887    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:44.265900    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:44.268379    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:44.268394    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:44.268401    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:44.268406    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:44.268411    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:44.268418    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:44.268425    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:44 GMT
	I0719 12:01:44.268430    4831 round_trippers.go:580]     Audit-Id: e3b9c9f7-69b1-432b-b77c-6beaa9d21a96
	I0719 12:01:44.268689    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:44.268907    4831 node_ready.go:53] node "multinode-871000-m02" has status "Ready":"False"
	I0719 12:01:44.765090    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:44.765110    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:44.765122    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:44.765129    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:44.767441    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:44.767454    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:44.767490    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:44.767499    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:44 GMT
	I0719 12:01:44.767503    4831 round_trippers.go:580]     Audit-Id: c4a012e5-0d69-4feb-86da-02ac3b9f71d9
	I0719 12:01:44.767509    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:44.767513    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:44.767519    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:44.767819    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:45.264985    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:45.265015    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:45.265028    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:45.265047    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:45.267769    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:45.267781    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:45.267788    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:45.267792    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:45 GMT
	I0719 12:01:45.267796    4831 round_trippers.go:580]     Audit-Id: 91fe7b24-4012-474f-b81a-d98e00c6c0b4
	I0719 12:01:45.267799    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:45.267803    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:45.267809    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:45.268174    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:45.764951    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:45.764964    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:45.764970    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:45.764972    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:45.766670    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:45.766679    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:45.766684    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:45 GMT
	I0719 12:01:45.766687    4831 round_trippers.go:580]     Audit-Id: 3f57bc9c-1895-425e-8f32-7c778ac1127f
	I0719 12:01:45.766690    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:45.766692    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:45.766695    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:45.766698    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:45.766743    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:46.265560    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:46.265587    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:46.265596    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:46.265602    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:46.269862    4831 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 12:01:46.269877    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:46.269884    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:46.269888    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:46.269902    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:46.269906    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:46.269911    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:46 GMT
	I0719 12:01:46.269915    4831 round_trippers.go:580]     Audit-Id: 40b37b98-047c-450f-b821-397b9e73ffb0
	I0719 12:01:46.269978    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:46.270197    4831 node_ready.go:53] node "multinode-871000-m02" has status "Ready":"False"
	I0719 12:01:46.764999    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:46.765011    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:46.765018    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:46.765021    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:46.766713    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:46.766725    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:46.766731    4831 round_trippers.go:580]     Audit-Id: 288bfc2e-d63e-46e9-9c5c-57dc3867758a
	I0719 12:01:46.766734    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:46.766736    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:46.766739    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:46.766742    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:46.766750    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:46 GMT
	I0719 12:01:46.766982    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:47.266637    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:47.266657    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:47.266669    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:47.266675    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:47.269479    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:47.269492    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:47.269499    4831 round_trippers.go:580]     Audit-Id: 91cabe9e-70b8-45e7-90ef-5fc77704a9c2
	I0719 12:01:47.269504    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:47.269508    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:47.269512    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:47.269516    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:47.269521    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:47 GMT
	I0719 12:01:47.269722    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:47.765021    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:47.765044    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:47.765057    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:47.765065    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:47.767754    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:47.767772    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:47.767780    4831 round_trippers.go:580]     Audit-Id: e18f83de-7fef-4570-89eb-0bdf49eabdff
	I0719 12:01:47.767784    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:47.767788    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:47.767846    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:47.767855    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:47.767859    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:47 GMT
	I0719 12:01:47.767924    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:48.266348    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:48.266381    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:48.266388    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:48.266393    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:48.267614    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:48.267624    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:48.267630    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:48.267641    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:48 GMT
	I0719 12:01:48.267645    4831 round_trippers.go:580]     Audit-Id: c60994e0-3273-44f0-9b1f-fdb43c3b91ff
	I0719 12:01:48.267647    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:48.267650    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:48.267652    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:48.267923    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:48.766000    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:48.766021    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:48.766034    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:48.766041    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:48.768309    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:48.768322    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:48.768329    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:48.768334    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:48.768372    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:48.768380    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:48.768384    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:48 GMT
	I0719 12:01:48.768388    4831 round_trippers.go:580]     Audit-Id: a74a3bf0-4931-4d9b-bec9-86a67668fe03
	I0719 12:01:48.768624    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:48.768851    4831 node_ready.go:53] node "multinode-871000-m02" has status "Ready":"False"
	I0719 12:01:49.265151    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:49.265173    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:49.265185    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:49.265191    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:49.268093    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:49.268112    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:49.268124    4831 round_trippers.go:580]     Audit-Id: 2058fe3a-ebe9-4fcd-a764-3672fdd77552
	I0719 12:01:49.268132    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:49.268140    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:49.268145    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:49.268150    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:49.268154    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:49 GMT
	I0719 12:01:49.268309    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:49.766279    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:49.766312    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:49.766324    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:49.766338    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:49.768937    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:49.768954    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:49.768961    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:49.768965    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:49.768968    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:49 GMT
	I0719 12:01:49.768973    4831 round_trippers.go:580]     Audit-Id: 6b0e57a4-5b0a-4aaa-b980-9cd99b7c0667
	I0719 12:01:49.768979    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:49.768982    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:49.769050    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:50.265661    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:50.265685    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:50.265697    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:50.265703    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:50.268658    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:50.268675    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:50.268682    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:50.268687    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:50 GMT
	I0719 12:01:50.268691    4831 round_trippers.go:580]     Audit-Id: ae59334d-22b0-4b64-a826-0df64953b4cd
	I0719 12:01:50.268695    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:50.268698    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:50.268702    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:50.269379    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:50.764951    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:50.764964    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:50.764969    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:50.764986    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:50.766535    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:50.766545    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:50.766550    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:50.766553    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:50.766556    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:50.766559    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:50 GMT
	I0719 12:01:50.766573    4831 round_trippers.go:580]     Audit-Id: 710225d9-6f59-4f1f-84a4-e01469a3682c
	I0719 12:01:50.766578    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:50.766723    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:51.265932    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:51.265952    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:51.265972    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:51.265978    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:51.268141    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:51.268157    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:51.268165    4831 round_trippers.go:580]     Audit-Id: 16f84d0e-6391-4cee-8f4b-507ac564ac4c
	I0719 12:01:51.268169    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:51.268174    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:51.268179    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:51.268187    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:51.268191    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:51 GMT
	I0719 12:01:51.268324    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:51.268570    4831 node_ready.go:53] node "multinode-871000-m02" has status "Ready":"False"
	I0719 12:01:51.765832    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:51.765854    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:51.765866    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:51.765873    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:51.768567    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:51.768580    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:51.768587    4831 round_trippers.go:580]     Audit-Id: 71e78895-27d0-40b7-923a-177c5af8be35
	I0719 12:01:51.768593    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:51.768596    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:51.768599    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:51.768622    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:51.768631    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:51 GMT
	I0719 12:01:51.769068    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:52.265410    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:52.265436    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.265513    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.265525    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.267784    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:52.267802    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.267820    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.267826    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.267834    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.267838    4831 round_trippers.go:580]     Audit-Id: 9999b67b-c754-4736-838b-505e58406082
	I0719 12:01:52.267841    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.267844    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.267908    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1134","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0719 12:01:52.268141    4831 node_ready.go:49] node "multinode-871000-m02" has status "Ready":"True"
	I0719 12:01:52.268151    4831 node_ready.go:38] duration metric: took 14.503383738s for node "multinode-871000-m02" to be "Ready" ...
	I0719 12:01:52.268159    4831 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 12:01:52.268198    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:01:52.268206    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.268213    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.268218    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.270589    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:52.270606    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.270617    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.270629    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.270640    4831 round_trippers.go:580]     Audit-Id: 5bd94b47-fb60-4f3e-a0d8-8b3573293b04
	I0719 12:01:52.270659    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.270665    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.270674    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.271354    4831 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1134"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"1037","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86445 chars]
	I0719 12:01:52.273219    4831 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.273253    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:01:52.273257    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.273262    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.273268    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.274481    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:52.274490    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.274507    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.274515    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.274518    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.274521    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.274525    4831 round_trippers.go:580]     Audit-Id: 4e6a1dda-7025-40a9-8dbb-8aff01d72511
	I0719 12:01:52.274528    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.274595    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"1037","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0719 12:01:52.274831    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:52.274839    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.274844    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.274849    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.275893    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:52.275901    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.275911    4831 round_trippers.go:580]     Audit-Id: 5caee3fd-28b0-4404-811c-7a58de2da195
	I0719 12:01:52.275916    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.275921    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.275926    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.275932    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.275937    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.276038    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0719 12:01:52.276204    4831 pod_ready.go:92] pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:52.276212    4831 pod_ready.go:81] duration metric: took 2.983057ms for pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.276218    4831 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.276249    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-871000
	I0719 12:01:52.276254    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.276259    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.276264    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.277157    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:52.277164    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.277169    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.277172    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.277174    4831 round_trippers.go:580]     Audit-Id: 71c4c88f-d401-43fe-88bc-540772973797
	I0719 12:01:52.277176    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.277179    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.277181    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.277316    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-871000","namespace":"kube-system","uid":"8818ed52-4b2d-4629-af02-b835e3cfa034","resourceVersion":"1020","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.mirror":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.seen":"2024-07-19T18:55:05.740545259Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0719 12:01:52.277529    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:52.277536    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.277541    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.277544    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.278676    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:52.278684    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.278688    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.278691    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.278709    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.278728    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.278733    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.278736    4831 round_trippers.go:580]     Audit-Id: 3c6b0076-0a4d-4f0a-88ee-b7f12cc3d3fe
	I0719 12:01:52.278872    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0719 12:01:52.279045    4831 pod_ready.go:92] pod "etcd-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:52.279053    4831 pod_ready.go:81] duration metric: took 2.830582ms for pod "etcd-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.279063    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.279097    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-871000
	I0719 12:01:52.279102    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.279107    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.279111    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.279995    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:52.280002    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.280008    4831 round_trippers.go:580]     Audit-Id: cdc2aab4-c9fa-4cd3-b5ec-c5ecc59b279e
	I0719 12:01:52.280016    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.280019    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.280022    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.280025    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.280028    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.280327    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-871000","namespace":"kube-system","uid":"9f3fdf92-3cbd-411c-802e-cbbbe1b60d68","resourceVersion":"993","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"1acc565de321609aa117f6402dfd5ae5","kubernetes.io/config.mirror":"1acc565de321609aa117f6402dfd5ae5","kubernetes.io/config.seen":"2024-07-19T18:55:05.740548209Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0719 12:01:52.280566    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:52.280573    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.280579    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.280584    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.281573    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:52.281580    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.281587    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.281592    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.281595    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.281600    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.281604    4831 round_trippers.go:580]     Audit-Id: af92d4c3-07ce-47ed-a287-2c1f4da9f9e1
	I0719 12:01:52.281607    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.281712    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0719 12:01:52.281882    4831 pod_ready.go:92] pod "kube-apiserver-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:52.281889    4831 pod_ready.go:81] duration metric: took 2.821128ms for pod "kube-apiserver-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.281895    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.281928    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-871000
	I0719 12:01:52.281936    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.281941    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.281945    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.282956    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:52.282962    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.282967    4831 round_trippers.go:580]     Audit-Id: 37b3d6f6-d5cb-41bb-bb6a-0314d8dae796
	I0719 12:01:52.282970    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.282974    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.282979    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.282983    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.282986    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.283119    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-871000","namespace":"kube-system","uid":"74e143fb-26b8-4d1d-b07a-f1b2c590133f","resourceVersion":"1003","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8f11f40ce051787c8d8ced4f83327f27","kubernetes.io/config.mirror":"8f11f40ce051787c8d8ced4f83327f27","kubernetes.io/config.seen":"2024-07-19T18:55:05.740548943Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0719 12:01:52.283339    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:52.283346    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.283351    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.283355    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.284256    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:52.284262    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.284267    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.284271    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.284275    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.284280    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.284284    4831 round_trippers.go:580]     Audit-Id: 289b264b-8fe4-44bb-a7ac-cfbefde406df
	I0719 12:01:52.284287    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.284403    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0719 12:01:52.284570    4831 pod_ready.go:92] pod "kube-controller-manager-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:52.284577    4831 pod_ready.go:81] duration metric: took 2.676992ms for pod "kube-controller-manager-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.284584    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-86ssb" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.466629    4831 request.go:629] Waited for 181.914475ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-86ssb
	I0719 12:01:52.466686    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-86ssb
	I0719 12:01:52.466695    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.466706    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.466715    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.469198    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:52.469214    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.469225    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.469231    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.469236    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.469240    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.469245    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.469249    4831 round_trippers.go:580]     Audit-Id: 429dceb6-e86e-4018-b556-14b2a2f022b2
	I0719 12:01:52.469457    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-86ssb","generateName":"kube-proxy-","namespace":"kube-system","uid":"37609942-98d8-4c6b-b339-53bf3a901e3f","resourceVersion":"1128","creationTimestamp":"2024-07-19T18:57:03Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0719 12:01:52.666944    4831 request.go:629] Waited for 197.144047ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m03
	I0719 12:01:52.667016    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m03
	I0719 12:01:52.667030    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.667046    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.667057    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.669979    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:52.669997    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.670008    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.670013    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.670018    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.670022    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.670045    4831 round_trippers.go:580]     Audit-Id: b275a4c2-4933-457c-b794-8cc1c82f8ff3
	I0719 12:01:52.670053    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.670190    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m03","uid":"4745805a-e01a-4411-b942-abcd092662c6","resourceVersion":"1125","creationTimestamp":"2024-07-19T18:59:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T11_59_53_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:59:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4301 chars]
	I0719 12:01:52.670429    4831 pod_ready.go:97] node "multinode-871000-m03" hosting pod "kube-proxy-86ssb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000-m03" has status "Ready":"Unknown"
	I0719 12:01:52.670443    4831 pod_ready.go:81] duration metric: took 385.855231ms for pod "kube-proxy-86ssb" in "kube-system" namespace to be "Ready" ...
	E0719 12:01:52.670470    4831 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-871000-m03" hosting pod "kube-proxy-86ssb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000-m03" has status "Ready":"Unknown"
	I0719 12:01:52.670488    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-89hm2" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.865457    4831 request.go:629] Waited for 194.920336ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89hm2
	I0719 12:01:52.865509    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89hm2
	I0719 12:01:52.865515    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.865521    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.865525    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.869315    4831 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 12:01:52.869326    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.869343    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.869346    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.869350    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.869352    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.869356    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:53 GMT
	I0719 12:01:52.869359    4831 round_trippers.go:580]     Audit-Id: 6d8a42fa-5f07-4ba5-b901-8d8df07718db
	I0719 12:01:52.869579    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-89hm2","generateName":"kube-proxy-","namespace":"kube-system","uid":"77b4b485-53f0-4480-8b62-a1df26f037b8","resourceVersion":"979","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0719 12:01:53.066786    4831 request.go:629] Waited for 196.934315ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:53.066915    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:53.066927    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:53.066938    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:53.066947    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:53.069488    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:53.069505    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:53.069512    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:53.069518    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:53.069531    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:53 GMT
	I0719 12:01:53.069536    4831 round_trippers.go:580]     Audit-Id: 29f7b1f1-8716-41e3-a3fb-e5054c127035
	I0719 12:01:53.069540    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:53.069543    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:53.069761    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0719 12:01:53.070013    4831 pod_ready.go:92] pod "kube-proxy-89hm2" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:53.070025    4831 pod_ready.go:81] duration metric: took 399.524656ms for pod "kube-proxy-89hm2" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:53.070034    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t9bqq" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:53.266326    4831 request.go:629] Waited for 196.226835ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t9bqq
	I0719 12:01:53.266366    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t9bqq
	I0719 12:01:53.266372    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:53.266380    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:53.266384    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:53.267734    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:53.267743    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:53.267748    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:53.267751    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:53.267754    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:53.267756    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:53.267759    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:53 GMT
	I0719 12:01:53.267762    4831 round_trippers.go:580]     Audit-Id: b4ba6119-891d-44da-b73d-627e20735b34
	I0719 12:01:53.267908    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t9bqq","generateName":"kube-proxy-","namespace":"kube-system","uid":"5ef191fc-6e2e-486c-b825-76c6e0d95416","resourceVersion":"1107","creationTimestamp":"2024-07-19T18:56:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:56:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0719 12:01:53.466322    4831 request.go:629] Waited for 198.087992ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:53.466383    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:53.466393    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:53.466402    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:53.466438    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:53.469092    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:53.469109    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:53.469116    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:53.469126    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:53 GMT
	I0719 12:01:53.469130    4831 round_trippers.go:580]     Audit-Id: 6e4d112e-e44b-4be7-9c97-17550fbf549f
	I0719 12:01:53.469133    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:53.469136    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:53.469139    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:53.469255    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1135","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0719 12:01:53.469467    4831 pod_ready.go:92] pod "kube-proxy-t9bqq" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:53.469477    4831 pod_ready.go:81] duration metric: took 399.439151ms for pod "kube-proxy-t9bqq" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:53.469486    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:53.667108    4831 request.go:629] Waited for 197.550527ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-871000
	I0719 12:01:53.667258    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-871000
	I0719 12:01:53.667270    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:53.667282    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:53.667290    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:53.669963    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:53.669979    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:53.669986    4831 round_trippers.go:580]     Audit-Id: df666b36-6b90-4074-890f-104b4903ef39
	I0719 12:01:53.670018    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:53.670026    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:53.670029    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:53.670046    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:53.670051    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:53 GMT
	I0719 12:01:53.670168    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-871000","namespace":"kube-system","uid":"0d73182a-0458-470e-ac06-ccde27fa113a","resourceVersion":"1012","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b43ab2caff1f80690c8bfbb88ac08a85","kubernetes.io/config.mirror":"b43ab2caff1f80690c8bfbb88ac08a85","kubernetes.io/config.seen":"2024-07-19T18:55:00.040869314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0719 12:01:53.866263    4831 request.go:629] Waited for 195.758116ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:53.866381    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:53.866388    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:53.866398    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:53.866406    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:53.869123    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:53.869138    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:53.869145    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:53.869150    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:53.869155    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:53.869158    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:54 GMT
	I0719 12:01:53.869162    4831 round_trippers.go:580]     Audit-Id: 5ab4535d-1732-458f-bd9e-5973cb44efb7
	I0719 12:01:53.869165    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:53.869396    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0719 12:01:53.869653    4831 pod_ready.go:92] pod "kube-scheduler-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:53.869670    4831 pod_ready.go:81] duration metric: took 400.174664ms for pod "kube-scheduler-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:53.869679    4831 pod_ready.go:38] duration metric: took 1.601516924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 12:01:53.869700    4831 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 12:01:53.869770    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 12:01:53.880607    4831 system_svc.go:56] duration metric: took 10.90301ms WaitForService to wait for kubelet
	I0719 12:01:53.880632    4831 kubeadm.go:582] duration metric: took 16.294192012s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:01:53.880647    4831 node_conditions.go:102] verifying NodePressure condition ...
	I0719 12:01:54.066856    4831 request.go:629] Waited for 186.137672ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes
	I0719 12:01:54.066970    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0719 12:01:54.066980    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:54.066991    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:54.066999    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:54.069828    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:54.069845    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:54.069852    4831 round_trippers.go:580]     Audit-Id: 032d2722-6758-4f22-b522-865e94a62ee3
	I0719 12:01:54.069856    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:54.069860    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:54.069864    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:54.069868    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:54.069874    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:54 GMT
	I0719 12:01:54.070339    4831 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1137"},"items":[{"metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15421 chars]
	I0719 12:01:54.070883    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:01:54.070895    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:01:54.070902    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:01:54.070906    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:01:54.070910    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:01:54.070931    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:01:54.070942    4831 node_conditions.go:105] duration metric: took 190.291261ms to run NodePressure ...
	I0719 12:01:54.070955    4831 start.go:241] waiting for startup goroutines ...
	I0719 12:01:54.070981    4831 start.go:255] writing updated cluster config ...
	I0719 12:01:54.093124    4831 out.go:177] 
	I0719 12:01:54.115152    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:01:54.115281    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:01:54.137731    4831 out.go:177] * Starting "multinode-871000-m03" worker node in "multinode-871000" cluster
	I0719 12:01:54.180697    4831 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:01:54.180719    4831 cache.go:56] Caching tarball of preloaded images
	I0719 12:01:54.180837    4831 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1053/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 12:01:54.180846    4831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:01:54.180923    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:01:54.181476    4831 start.go:360] acquireMachinesLock for multinode-871000-m03: {Name:mk9f33e92e6d472bd2fb7a1dc1c9d72253ce59c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:01:54.181527    4831 start.go:364] duration metric: took 34.731µs to acquireMachinesLock for "multinode-871000-m03"
	I0719 12:01:54.181541    4831 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:01:54.181546    4831 fix.go:54] fixHost starting: m03
	I0719 12:01:54.181769    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:54.181782    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:54.190693    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53227
	I0719 12:01:54.191078    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:54.191409    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:54.191427    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:54.191664    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:54.191806    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:01:54.191900    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetState
	I0719 12:01:54.191983    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:54.192077    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | hyperkit pid from json: 4511
	I0719 12:01:54.192997    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | hyperkit pid 4511 missing from process table
	I0719 12:01:54.193036    4831 fix.go:112] recreateIfNeeded on multinode-871000-m03: state=Stopped err=<nil>
	I0719 12:01:54.193050    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	W0719 12:01:54.193163    4831 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:01:54.214531    4831 out.go:177] * Restarting existing hyperkit VM for "multinode-871000-m03" ...
	I0719 12:01:54.256856    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .Start
	I0719 12:01:54.257173    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:54.257206    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/hyperkit.pid
	I0719 12:01:54.257309    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | Using UUID f7120658-3396-42ae-acb1-8416661a4529
	I0719 12:01:54.284634    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | Generated MAC 5e:a3:f5:89:e4:9e
	I0719 12:01:54.284657    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000
	I0719 12:01:54.284793    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7120658-3396-42ae-acb1-8416661a4529", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b7a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0719 12:01:54.284824    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7120658-3396-42ae-acb1-8416661a4529", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b7a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0719 12:01:54.284902    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f7120658-3396-42ae-acb1-8416661a4529", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/multinode-871000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/tty,log=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/bzimage,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000"}
	I0719 12:01:54.284935    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f7120658-3396-42ae-acb1-8416661a4529 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/multinode-871000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/tty,log=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/bzimage,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000"
	I0719 12:01:54.284960    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0719 12:01:54.286483    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 DEBUG: hyperkit: Pid is 4868
	I0719 12:01:54.287010    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | Attempt 0
	I0719 12:01:54.287032    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:54.287126    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | hyperkit pid from json: 4868
	I0719 12:01:54.288216    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | Searching for 5e:a3:f5:89:e4:9e in /var/db/dhcpd_leases ...
	I0719 12:01:54.288299    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0719 12:01:54.288315    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:36:3f:5c:47:18:4c ID:1,36:3f:5c:47:18:4c Lease:0x669c0983}
	I0719 12:01:54.288341    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:4c:c6:88:73:ec ID:1,f2:4c:c6:88:73:ec Lease:0x669c0959}
	I0719 12:01:54.288356    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:5e:a3:f5:89:e4:9e ID:1,5e:a3:f5:89:e4:9e Lease:0x669ab7be}
	I0719 12:01:54.288369    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | Found match: 5e:a3:f5:89:e4:9e
	I0719 12:01:54.288381    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | IP: 192.169.0.19
	I0719 12:01:54.288429    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetConfigRaw
	I0719 12:01:54.289107    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetIP
	I0719 12:01:54.289292    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:01:54.289738    4831 machine.go:94] provisionDockerMachine start ...
	I0719 12:01:54.289749    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:01:54.289886    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:01:54.290004    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:01:54.290104    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:01:54.290216    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:01:54.290300    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:01:54.290421    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:54.290589    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0719 12:01:54.290597    4831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 12:01:54.293969    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0719 12:01:54.302088    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0719 12:01:54.303180    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0719 12:01:54.303201    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0719 12:01:54.303211    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0719 12:01:54.303221    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0719 12:01:54.682894    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0719 12:01:54.682910    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0719 12:01:54.797679    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0719 12:01:54.797695    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0719 12:01:54.797703    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0719 12:01:54.797713    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0719 12:01:54.798562    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0719 12:01:54.798576    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0719 12:02:00.065970    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:02:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0719 12:02:00.066043    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:02:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0719 12:02:00.066053    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:02:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0719 12:02:00.089773    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:02:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0719 12:02:29.359756    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 12:02:29.359774    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetMachineName
	I0719 12:02:29.359894    4831 buildroot.go:166] provisioning hostname "multinode-871000-m03"
	I0719 12:02:29.359902    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetMachineName
	I0719 12:02:29.360006    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:29.360091    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:29.360185    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.360263    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.360358    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:29.360484    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:02:29.360658    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0719 12:02:29.360668    4831 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-871000-m03 && echo "multinode-871000-m03" | sudo tee /etc/hostname
	I0719 12:02:29.431846    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-871000-m03
	
	I0719 12:02:29.431861    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:29.432004    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:29.432106    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.432218    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.432318    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:29.432436    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:02:29.432574    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0719 12:02:29.432587    4831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-871000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-871000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-871000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 12:02:29.498792    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 12:02:29.498812    4831 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19307-1053/.minikube CaCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19307-1053/.minikube}
	I0719 12:02:29.498823    4831 buildroot.go:174] setting up certificates
	I0719 12:02:29.498829    4831 provision.go:84] configureAuth start
	I0719 12:02:29.498837    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetMachineName
	I0719 12:02:29.498967    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetIP
	I0719 12:02:29.499068    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:29.499151    4831 provision.go:143] copyHostCerts
	I0719 12:02:29.499179    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem
	I0719 12:02:29.499239    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem, removing ...
	I0719 12:02:29.499245    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem
	I0719 12:02:29.499381    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem (1123 bytes)
	I0719 12:02:29.499598    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem
	I0719 12:02:29.499639    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem, removing ...
	I0719 12:02:29.499644    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem
	I0719 12:02:29.499762    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem (1675 bytes)
	I0719 12:02:29.499934    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem
	I0719 12:02:29.499980    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem, removing ...
	I0719 12:02:29.499985    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem
	I0719 12:02:29.500065    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem (1078 bytes)
	I0719 12:02:29.500222    4831 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem org=jenkins.multinode-871000-m03 san=[127.0.0.1 192.169.0.19 localhost minikube multinode-871000-m03]
	I0719 12:02:29.645278    4831 provision.go:177] copyRemoteCerts
	I0719 12:02:29.645324    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 12:02:29.645339    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:29.645484    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:29.645585    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.645676    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:29.645763    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/id_rsa Username:docker}
	I0719 12:02:29.682917    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 12:02:29.682996    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 12:02:29.702497    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 12:02:29.702567    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0719 12:02:29.722047    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 12:02:29.722114    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 12:02:29.741874    4831 provision.go:87] duration metric: took 243.037708ms to configureAuth
	I0719 12:02:29.741888    4831 buildroot.go:189] setting minikube options for container-runtime
	I0719 12:02:29.742066    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:02:29.742080    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:02:29.742229    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:29.742333    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:29.742417    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.742507    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.742593    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:29.742699    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:02:29.742837    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0719 12:02:29.742846    4831 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 12:02:29.803807    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 12:02:29.803823    4831 buildroot.go:70] root file system type: tmpfs
	I0719 12:02:29.803905    4831 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 12:02:29.803915    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:29.804045    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:29.804139    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.804214    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.804302    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:29.804417    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:02:29.804564    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0719 12:02:29.804617    4831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.16"
	Environment="NO_PROXY=192.169.0.16,192.169.0.18"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 12:02:29.875877    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.16
	Environment=NO_PROXY=192.169.0.16,192.169.0.18
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 12:02:29.875896    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:29.876021    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:29.876125    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.876208    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.876292    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:29.876424    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:02:29.876575    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0719 12:02:29.876590    4831 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 12:02:31.460754    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 12:02:31.460767    4831 machine.go:97] duration metric: took 37.171139772s to provisionDockerMachine
	I0719 12:02:31.460777    4831 start.go:293] postStartSetup for "multinode-871000-m03" (driver="hyperkit")
	I0719 12:02:31.460790    4831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 12:02:31.460801    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:02:31.460988    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 12:02:31.461003    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:31.461092    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:31.461178    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:31.461265    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:31.461358    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/id_rsa Username:docker}
	I0719 12:02:31.497118    4831 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 12:02:31.500034    4831 command_runner.go:130] > NAME=Buildroot
	I0719 12:02:31.500042    4831 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 12:02:31.500045    4831 command_runner.go:130] > ID=buildroot
	I0719 12:02:31.500049    4831 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 12:02:31.500053    4831 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 12:02:31.500192    4831 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 12:02:31.500199    4831 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1053/.minikube/addons for local assets ...
	I0719 12:02:31.500297    4831 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1053/.minikube/files for local assets ...
	I0719 12:02:31.500478    4831 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> 15922.pem in /etc/ssl/certs
	I0719 12:02:31.500488    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> /etc/ssl/certs/15922.pem
	I0719 12:02:31.500693    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 12:02:31.507900    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem --> /etc/ssl/certs/15922.pem (1708 bytes)
	I0719 12:02:31.527754    4831 start.go:296] duration metric: took 66.968583ms for postStartSetup
	I0719 12:02:31.527774    4831 fix.go:56] duration metric: took 37.346347633s for fixHost
	I0719 12:02:31.527790    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:31.527920    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:31.528025    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:31.528116    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:31.528197    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:31.528319    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:02:31.528466    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0719 12:02:31.528474    4831 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 12:02:31.588013    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721415751.783803050
	
	I0719 12:02:31.588028    4831 fix.go:216] guest clock: 1721415751.783803050
	I0719 12:02:31.588034    4831 fix.go:229] Guest: 2024-07-19 12:02:31.78380305 -0700 PDT Remote: 2024-07-19 12:02:31.52778 -0700 PDT m=+119.162745825 (delta=256.02305ms)
	I0719 12:02:31.588048    4831 fix.go:200] guest clock delta is within tolerance: 256.02305ms
	I0719 12:02:31.588053    4831 start.go:83] releasing machines lock for "multinode-871000-m03", held for 37.406637946s
	I0719 12:02:31.588067    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:02:31.588193    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetIP
	I0719 12:02:31.609709    4831 out.go:177] * Found network options:
	I0719 12:02:31.631811    4831 out.go:177]   - NO_PROXY=192.169.0.16,192.169.0.18
	W0719 12:02:31.653553    4831 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 12:02:31.653587    4831 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 12:02:31.653606    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:02:31.654506    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:02:31.654807    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:02:31.654947    4831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 12:02:31.654991    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	W0719 12:02:31.655084    4831 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 12:02:31.655117    4831 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 12:02:31.655203    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:31.655207    4831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 12:02:31.655271    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:31.655389    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:31.655434    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:31.655606    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:31.655635    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:31.655793    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/id_rsa Username:docker}
	I0719 12:02:31.655804    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:31.655935    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/id_rsa Username:docker}
	I0719 12:02:31.690038    4831 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 12:02:31.690089    4831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 12:02:31.690153    4831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 12:02:31.738898    4831 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0719 12:02:31.739073    4831 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0719 12:02:31.739115    4831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 12:02:31.739132    4831 start.go:495] detecting cgroup driver to use...
	I0719 12:02:31.739257    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 12:02:31.755356    4831 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 12:02:31.755645    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 12:02:31.764199    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 12:02:31.773939    4831 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 12:02:31.773998    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 12:02:31.782302    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 12:02:31.790388    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 12:02:31.798379    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 12:02:31.806750    4831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 12:02:31.816315    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 12:02:31.825398    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 12:02:31.834304    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 12:02:31.843358    4831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 12:02:31.851357    4831 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 12:02:31.851516    4831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 12:02:31.860150    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:02:31.955825    4831 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 12:02:31.974599    4831 start.go:495] detecting cgroup driver to use...
	I0719 12:02:31.974665    4831 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 12:02:31.989878    4831 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 12:02:31.990301    4831 command_runner.go:130] > [Unit]
	I0719 12:02:31.990311    4831 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 12:02:31.990318    4831 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 12:02:31.990323    4831 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 12:02:31.990328    4831 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 12:02:31.990333    4831 command_runner.go:130] > StartLimitBurst=3
	I0719 12:02:31.990337    4831 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 12:02:31.990340    4831 command_runner.go:130] > [Service]
	I0719 12:02:31.990343    4831 command_runner.go:130] > Type=notify
	I0719 12:02:31.990347    4831 command_runner.go:130] > Restart=on-failure
	I0719 12:02:31.990352    4831 command_runner.go:130] > Environment=NO_PROXY=192.169.0.16
	I0719 12:02:31.990356    4831 command_runner.go:130] > Environment=NO_PROXY=192.169.0.16,192.169.0.18
	I0719 12:02:31.990364    4831 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 12:02:31.990371    4831 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 12:02:31.990377    4831 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 12:02:31.990383    4831 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 12:02:31.990388    4831 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 12:02:31.990394    4831 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 12:02:31.990403    4831 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 12:02:31.990409    4831 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 12:02:31.990415    4831 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 12:02:31.990418    4831 command_runner.go:130] > ExecStart=
	I0719 12:02:31.990430    4831 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0719 12:02:31.990435    4831 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 12:02:31.990441    4831 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 12:02:31.990446    4831 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 12:02:31.990450    4831 command_runner.go:130] > LimitNOFILE=infinity
	I0719 12:02:31.990453    4831 command_runner.go:130] > LimitNPROC=infinity
	I0719 12:02:31.990456    4831 command_runner.go:130] > LimitCORE=infinity
	I0719 12:02:31.990464    4831 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 12:02:31.990469    4831 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 12:02:31.990488    4831 command_runner.go:130] > TasksMax=infinity
	I0719 12:02:31.990495    4831 command_runner.go:130] > TimeoutStartSec=0
	I0719 12:02:31.990501    4831 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 12:02:31.990505    4831 command_runner.go:130] > Delegate=yes
	I0719 12:02:31.990521    4831 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 12:02:31.990527    4831 command_runner.go:130] > KillMode=process
	I0719 12:02:31.990532    4831 command_runner.go:130] > [Install]
	I0719 12:02:31.990538    4831 command_runner.go:130] > WantedBy=multi-user.target
	I0719 12:02:31.990604    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 12:02:32.002685    4831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 12:02:32.021379    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 12:02:32.031904    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 12:02:32.047913    4831 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 12:02:32.066032    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 12:02:32.076476    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 12:02:32.091156    4831 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 12:02:32.091445    4831 ssh_runner.go:195] Run: which cri-dockerd
	I0719 12:02:32.094387    4831 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 12:02:32.094573    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 12:02:32.101884    4831 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 12:02:32.115549    4831 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 12:02:32.212114    4831 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 12:02:32.324274    4831 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 12:02:32.324305    4831 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 12:02:32.338049    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:02:32.429800    4831 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 12:03:33.477808    4831 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0719 12:03:33.477824    4831 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0719 12:03:33.477834    4831 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.048215261s)
	I0719 12:03:33.477889    4831 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0719 12:03:33.487534    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0719 12:03:33.487548    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:29.489039182Z" level=info msg="Starting up"
	I0719 12:03:33.487561    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:29.489485651Z" level=info msg="containerd not running, starting managed containerd"
	I0719 12:03:33.487573    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:29.490106672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=516
	I0719 12:03:33.487582    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.504729944Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 12:03:33.487592    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.519842957Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 12:03:33.487605    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.519924102Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 12:03:33.487614    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.519989972Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 12:03:33.487623    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520025226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 12:03:33.487634    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520192589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 12:03:33.487644    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520242309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 12:03:33.487666    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520383559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 12:03:33.487675    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520429744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 12:03:33.487687    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520463815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 12:03:33.487699    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520494329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 12:03:33.487709    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520622328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 12:03:33.487718    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520824297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 12:03:33.487731    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522368920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 12:03:33.487741    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522413855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 12:03:33.487841    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522541465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 12:03:33.487858    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522582111Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 12:03:33.487869    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522705501Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 12:03:33.487877    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522755283Z" level=info msg="metadata content store policy set" policy=shared
	I0719 12:03:33.487886    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524108114Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 12:03:33.487895    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524211538Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 12:03:33.487904    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524258430Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 12:03:33.487913    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524359849Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 12:03:33.487921    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524403870Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 12:03:33.487932    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524475611Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 12:03:33.487941    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524693533Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 12:03:33.487950    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524857653Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 12:03:33.487961    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524902532Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 12:03:33.487971    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524935305Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 12:03:33.487983    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524974256Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.487994    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525010368Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.488004    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525041413Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.488013    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525072409Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.488023    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525104745Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.488032    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525139114Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.488111    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525170076Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.488125    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525200241Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.488137    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525237119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488146    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525272787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488155    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525304916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488163    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525339108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488172    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525371160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488181    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525406650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488189    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525439163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488198    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525469499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488207    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525502037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488218    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525533873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488227    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525563372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488236    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525592721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488244    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525622341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488253    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525653422Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 12:03:33.488261    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525690287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488270    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525721827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488279    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525751498Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 12:03:33.488288    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525806277Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 12:03:33.488299    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525842248Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 12:03:33.488309    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525874949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 12:03:33.488456    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525905187Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 12:03:33.488467    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525935128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488478    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526093302Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 12:03:33.488486    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526134238Z" level=info msg="NRI interface is disabled by configuration."
	I0719 12:03:33.488494    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526368235Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 12:03:33.488502    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526492146Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 12:03:33.488510    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526555812Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 12:03:33.488517    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526592041Z" level=info msg="containerd successfully booted in 0.022526s"
	I0719 12:03:33.488525    4831 command_runner.go:130] > Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.512068043Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 12:03:33.488533    4831 command_runner.go:130] > Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.521019942Z" level=info msg="Loading containers: start."
	I0719 12:03:33.488551    4831 command_runner.go:130] > Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.616685011Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 12:03:33.488562    4831 command_runner.go:130] > Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.681522031Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0719 12:03:33.488570    4831 command_runner.go:130] > Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.614445200Z" level=info msg="Loading containers: done."
	I0719 12:03:33.488579    4831 command_runner.go:130] > Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.631575085Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 12:03:33.488587    4831 command_runner.go:130] > Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.631860425Z" level=info msg="Daemon has completed initialization"
	I0719 12:03:33.488594    4831 command_runner.go:130] > Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.655164938Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 12:03:33.488602    4831 command_runner.go:130] > Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.655239665Z" level=info msg="API listen on [::]:2376"
	I0719 12:03:33.488607    4831 command_runner.go:130] > Jul 19 19:02:31 multinode-871000-m03 systemd[1]: Started Docker Application Container Engine.
	I0719 12:03:33.488614    4831 command_runner.go:130] > Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.638339545Z" level=info msg="Processing signal 'terminated'"
	I0719 12:03:33.488619    4831 command_runner.go:130] > Jul 19 19:02:32 multinode-871000-m03 systemd[1]: Stopping Docker Application Container Engine...
	I0719 12:03:33.488629    4831 command_runner.go:130] > Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.639494009Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 12:03:33.488640    4831 command_runner.go:130] > Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.639765769Z" level=info msg="Daemon shutdown complete"
	I0719 12:03:33.488648    4831 command_runner.go:130] > Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.639870632Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 12:03:33.488681    4831 command_runner.go:130] > Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.640041119Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 12:03:33.488687    4831 command_runner.go:130] > Jul 19 19:02:33 multinode-871000-m03 systemd[1]: docker.service: Deactivated successfully.
	I0719 12:03:33.488696    4831 command_runner.go:130] > Jul 19 19:02:33 multinode-871000-m03 systemd[1]: Stopped Docker Application Container Engine.
	I0719 12:03:33.488701    4831 command_runner.go:130] > Jul 19 19:02:33 multinode-871000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0719 12:03:33.488709    4831 command_runner.go:130] > Jul 19 19:02:33 multinode-871000-m03 dockerd[846]: time="2024-07-19T19:02:33.684394739Z" level=info msg="Starting up"
	I0719 12:03:33.488719    4831 command_runner.go:130] > Jul 19 19:03:33 multinode-871000-m03 dockerd[846]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0719 12:03:33.488728    4831 command_runner.go:130] > Jul 19 19:03:33 multinode-871000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0719 12:03:33.488734    4831 command_runner.go:130] > Jul 19 19:03:33 multinode-871000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0719 12:03:33.488741    4831 command_runner.go:130] > Jul 19 19:03:33 multinode-871000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	I0719 12:03:33.513298    4831 out.go:177] 
	W0719 12:03:33.535237    4831 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 19:02:29 multinode-871000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 19:02:29 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:29.489039182Z" level=info msg="Starting up"
	Jul 19 19:02:29 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:29.489485651Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 19:02:29 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:29.490106672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=516
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.504729944Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.519842957Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.519924102Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.519989972Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520025226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520192589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520242309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520383559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520429744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520463815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520494329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520622328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520824297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522368920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522413855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522541465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522582111Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522705501Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522755283Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524108114Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524211538Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524258430Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524359849Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524403870Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524475611Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524693533Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524857653Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524902532Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524935305Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524974256Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525010368Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525041413Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525072409Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525104745Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525139114Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525170076Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525200241Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525237119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525272787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525304916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525339108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525371160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525406650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525439163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525469499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525502037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525533873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525563372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525592721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525622341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525653422Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525690287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525721827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525751498Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525806277Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525842248Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525874949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525905187Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525935128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526093302Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526134238Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526368235Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526492146Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526555812Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526592041Z" level=info msg="containerd successfully booted in 0.022526s"
	Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.512068043Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.521019942Z" level=info msg="Loading containers: start."
	Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.616685011Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.681522031Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.614445200Z" level=info msg="Loading containers: done."
	Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.631575085Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.631860425Z" level=info msg="Daemon has completed initialization"
	Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.655164938Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.655239665Z" level=info msg="API listen on [::]:2376"
	Jul 19 19:02:31 multinode-871000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.638339545Z" level=info msg="Processing signal 'terminated'"
	Jul 19 19:02:32 multinode-871000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.639494009Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.639765769Z" level=info msg="Daemon shutdown complete"
	Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.639870632Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.640041119Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 19:02:33 multinode-871000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 19:02:33 multinode-871000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 19:02:33 multinode-871000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 19:02:33 multinode-871000-m03 dockerd[846]: time="2024-07-19T19:02:33.684394739Z" level=info msg="Starting up"
	Jul 19 19:03:33 multinode-871000-m03 dockerd[846]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 19:03:33 multinode-871000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 19:03:33 multinode-871000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 19:03:33 multinode-871000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 19:02:29 multinode-871000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 19:02:29 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:29.489039182Z" level=info msg="Starting up"
	Jul 19 19:02:29 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:29.489485651Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 19:02:29 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:29.490106672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=516
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.504729944Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.519842957Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.519924102Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.519989972Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520025226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520192589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520242309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520383559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520429744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520463815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520494329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520622328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520824297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522368920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522413855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522541465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522582111Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522705501Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522755283Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524108114Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524211538Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524258430Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524359849Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524403870Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524475611Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524693533Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524857653Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524902532Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524935305Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524974256Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525010368Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525041413Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525072409Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525104745Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525139114Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525170076Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525200241Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525237119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525272787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525304916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525339108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525371160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525406650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525439163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525469499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525502037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525533873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525563372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525592721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525622341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525653422Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525690287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525721827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525751498Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525806277Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525842248Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525874949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525905187Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525935128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526093302Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526134238Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526368235Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526492146Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526555812Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526592041Z" level=info msg="containerd successfully booted in 0.022526s"
	Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.512068043Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.521019942Z" level=info msg="Loading containers: start."
	Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.616685011Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.681522031Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.614445200Z" level=info msg="Loading containers: done."
	Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.631575085Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.631860425Z" level=info msg="Daemon has completed initialization"
	Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.655164938Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.655239665Z" level=info msg="API listen on [::]:2376"
	Jul 19 19:02:31 multinode-871000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.638339545Z" level=info msg="Processing signal 'terminated'"
	Jul 19 19:02:32 multinode-871000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.639494009Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.639765769Z" level=info msg="Daemon shutdown complete"
	Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.639870632Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.640041119Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 19:02:33 multinode-871000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 19:02:33 multinode-871000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 19:02:33 multinode-871000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 19:02:33 multinode-871000-m03 dockerd[846]: time="2024-07-19T19:02:33.684394739Z" level=info msg="Starting up"
	Jul 19 19:03:33 multinode-871000-m03 dockerd[846]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 19:03:33 multinode-871000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 19:03:33 multinode-871000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 19:03:33 multinode-871000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0719 12:03:33.535347    4831 out.go:239] * 
	W0719 12:03:33.536533    4831 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:03:33.599187    4831 out.go:177] 

** /stderr **
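One detail worth noting in the journalctl excerpt above: dockerd's second start logs "Starting up" at 19:02:33 and fails with `failed to dial "/run/containerd/containerd.sock": context deadline exceeded` at exactly 19:03:33. The 60-second gap is consistent with dockerd giving up on a fixed dial deadline for the containerd socket rather than crashing immediately. A minimal sketch (timestamps copied from the log above) confirming the gap:

```python
from datetime import datetime

# Timestamps taken from the journalctl excerpt above: dockerd's restart
# ("Starting up", pid 846) and the subsequent containerd dial failure.
started = datetime.fromisoformat("2024-07-19T19:02:33")
failed = datetime.fromisoformat("2024-07-19T19:03:33")

# A round 60-second gap suggests a dial timeout expiring, i.e. the managed
# containerd never came up (or never created its socket) after the restart.
gap_seconds = (failed - started).total_seconds()
print(gap_seconds)
```

This points the investigation at why containerd did not recreate its socket during the `systemctl restart docker` cycle, rather than at dockerd itself.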
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-871000" : exit status 90
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-871000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-871000 -n multinode-871000
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-871000 logs -n 25: (2.810568306s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                             |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-871000 ssh -n                                                                                                     | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | multinode-871000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-871000 cp multinode-871000-m02:/home/docker/cp-test.txt                                                           | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1594904202/001/cp-test_multinode-871000-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-871000 ssh -n                                                                                                     | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | multinode-871000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-871000 cp multinode-871000-m02:/home/docker/cp-test.txt                                                           | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | multinode-871000:/home/docker/cp-test_multinode-871000-m02_multinode-871000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-871000 ssh -n                                                                                                     | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | multinode-871000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-871000 ssh -n multinode-871000 sudo cat                                                                           | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | /home/docker/cp-test_multinode-871000-m02_multinode-871000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-871000 cp multinode-871000-m02:/home/docker/cp-test.txt                                                           | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | multinode-871000-m03:/home/docker/cp-test_multinode-871000-m02_multinode-871000-m03.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-871000 ssh -n                                                                                                     | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | multinode-871000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-871000 ssh -n multinode-871000-m03 sudo cat                                                                       | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | /home/docker/cp-test_multinode-871000-m02_multinode-871000-m03.txt                                                          |                  |         |         |                     |                     |
	| cp      | multinode-871000 cp testdata/cp-test.txt                                                                                    | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | multinode-871000-m03:/home/docker/cp-test.txt                                                                               |                  |         |         |                     |                     |
	| ssh     | multinode-871000 ssh -n                                                                                                     | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | multinode-871000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-871000 cp multinode-871000-m03:/home/docker/cp-test.txt                                                           | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1594904202/001/cp-test_multinode-871000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-871000 ssh -n                                                                                                     | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | multinode-871000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-871000 cp multinode-871000-m03:/home/docker/cp-test.txt                                                           | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | multinode-871000:/home/docker/cp-test_multinode-871000-m03_multinode-871000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-871000 ssh -n                                                                                                     | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | multinode-871000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-871000 ssh -n multinode-871000 sudo cat                                                                           | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | /home/docker/cp-test_multinode-871000-m03_multinode-871000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-871000 cp multinode-871000-m03:/home/docker/cp-test.txt                                                           | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | multinode-871000-m02:/home/docker/cp-test_multinode-871000-m03_multinode-871000-m02.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-871000 ssh -n                                                                                                     | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | multinode-871000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-871000 ssh -n multinode-871000-m02 sudo cat                                                                       | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | /home/docker/cp-test_multinode-871000-m03_multinode-871000-m02.txt                                                          |                  |         |         |                     |                     |
	| node    | multinode-871000 node stop m03                                                                                              | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	| node    | multinode-871000 node start                                                                                                 | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT | 19 Jul 24 12:00 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                                  |                  |         |         |                     |                     |
	| node    | list -p multinode-871000                                                                                                    | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 12:00 PDT |                     |
	| stop    | -p multinode-871000                                                                                                         | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 12:00 PDT | 19 Jul 24 12:00 PDT |
	| start   | -p multinode-871000                                                                                                         | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 12:00 PDT |                     |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	| node    | list -p multinode-871000                                                                                                    | multinode-871000 | jenkins | v1.33.1 | 19 Jul 24 12:03 PDT |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 12:00:32
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 12:00:32.402048    4831 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:00:32.402301    4831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:00:32.402306    4831 out.go:304] Setting ErrFile to fd 2...
	I0719 12:00:32.402310    4831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:00:32.402455    4831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
	I0719 12:00:32.403925    4831 out.go:298] Setting JSON to false
	I0719 12:00:32.426276    4831 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3602,"bootTime":1721412030,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0719 12:00:32.426364    4831 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:00:32.447891    4831 out.go:177] * [multinode-871000] minikube v1.33.1 on Darwin 14.5
	I0719 12:00:32.489466    4831 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:00:32.489530    4831 notify.go:220] Checking for updates...
	I0719 12:00:32.533563    4831 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 12:00:32.554596    4831 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0719 12:00:32.575798    4831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:00:32.596829    4831 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1053/.minikube
	I0719 12:00:32.618587    4831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:00:32.640567    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:00:32.640788    4831 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:00:32.641433    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:00:32.641515    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:00:32.651128    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53156
	I0719 12:00:32.651644    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:00:32.652227    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:00:32.652240    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:00:32.652558    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:00:32.652848    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:32.681431    4831 out.go:177] * Using the hyperkit driver based on existing profile
	I0719 12:00:32.723797    4831 start.go:297] selected driver: hyperkit
	I0719 12:00:32.723820    4831 start.go:901] validating driver "hyperkit" against &{Name:multinode-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.19 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:f
alse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:00:32.724058    4831 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:00:32.724240    4831 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:00:32.724440    4831 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19307-1053/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0719 12:00:32.734251    4831 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0719 12:00:32.738039    4831 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:00:32.738061    4831 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0719 12:00:32.741095    4831 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:00:32.741160    4831 cni.go:84] Creating CNI manager for ""
	I0719 12:00:32.741169    4831 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 12:00:32.741249    4831 start.go:340] cluster config:
	{Name:multinode-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-871000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.19 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:
false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:00:32.741357    4831 iso.go:125] acquiring lock: {Name:mkefd37d87f1d623b7fad18d7afa6e68e29a5c1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:00:32.783504    4831 out.go:177] * Starting "multinode-871000" primary control-plane node in "multinode-871000" cluster
	I0719 12:00:32.804769    4831 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:00:32.804839    4831 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1053/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 12:00:32.804870    4831 cache.go:56] Caching tarball of preloaded images
	I0719 12:00:32.805070    4831 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1053/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 12:00:32.805092    4831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:00:32.805281    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:00:32.806338    4831 start.go:360] acquireMachinesLock for multinode-871000: {Name:mk9f33e92e6d472bd2fb7a1dc1c9d72253ce59c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:00:32.806514    4831 start.go:364] duration metric: took 150.522µs to acquireMachinesLock for "multinode-871000"
	I0719 12:00:32.806547    4831 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:00:32.806566    4831 fix.go:54] fixHost starting: 
	I0719 12:00:32.806933    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:00:32.806964    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:00:32.815725    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53158
	I0719 12:00:32.816080    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:00:32.816412    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:00:32.816423    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:00:32.816631    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:00:32.816759    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:32.816865    4831 main.go:141] libmachine: (multinode-871000) Calling .GetState
	I0719 12:00:32.816949    4831 main.go:141] libmachine: (multinode-871000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:00:32.817026    4831 main.go:141] libmachine: (multinode-871000) DBG | hyperkit pid from json: 4202
	I0719 12:00:32.817953    4831 main.go:141] libmachine: (multinode-871000) DBG | hyperkit pid 4202 missing from process table
	I0719 12:00:32.817998    4831 fix.go:112] recreateIfNeeded on multinode-871000: state=Stopped err=<nil>
	I0719 12:00:32.818017    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	W0719 12:00:32.818110    4831 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:00:32.860589    4831 out.go:177] * Restarting existing hyperkit VM for "multinode-871000" ...
	I0719 12:00:32.883761    4831 main.go:141] libmachine: (multinode-871000) Calling .Start
	I0719 12:00:32.884214    4831 main.go:141] libmachine: (multinode-871000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/hyperkit.pid
	I0719 12:00:32.884261    4831 main.go:141] libmachine: (multinode-871000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:00:32.885990    4831 main.go:141] libmachine: (multinode-871000) DBG | hyperkit pid 4202 missing from process table
	I0719 12:00:32.886013    4831 main.go:141] libmachine: (multinode-871000) DBG | pid 4202 is in state "Stopped"
	I0719 12:00:32.886031    4831 main.go:141] libmachine: (multinode-871000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/hyperkit.pid...
	I0719 12:00:32.886224    4831 main.go:141] libmachine: (multinode-871000) DBG | Using UUID 50732e8d-1439-4d54-9eb1-76002314766d
	I0719 12:00:32.993265    4831 main.go:141] libmachine: (multinode-871000) DBG | Generated MAC f2:4c:c6:88:73:ec
	I0719 12:00:32.993291    4831 main.go:141] libmachine: (multinode-871000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000
	I0719 12:00:32.993436    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"50732e8d-1439-4d54-9eb1-76002314766d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000381500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0719 12:00:32.993474    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"50732e8d-1439-4d54-9eb1-76002314766d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000381500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0719 12:00:32.993514    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "50732e8d-1439-4d54-9eb1-76002314766d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/multinode-871000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/tty,log=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/bzimage,/Users/jenkins/minikube-integration/1930
7-1053/.minikube/machines/multinode-871000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000"}
	I0719 12:00:32.993552    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 50732e8d-1439-4d54-9eb1-76002314766d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/multinode-871000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/tty,log=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/console-ring -f kexec,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/bzimage,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/initrd,earlyprintk=
serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000"
	I0719 12:00:32.993570    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0719 12:00:32.995054    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:32 DEBUG: hyperkit: Pid is 4843
	I0719 12:00:32.995478    4831 main.go:141] libmachine: (multinode-871000) DBG | Attempt 0
	I0719 12:00:32.995491    4831 main.go:141] libmachine: (multinode-871000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:00:32.995589    4831 main.go:141] libmachine: (multinode-871000) DBG | hyperkit pid from json: 4843
	I0719 12:00:32.997408    4831 main.go:141] libmachine: (multinode-871000) DBG | Searching for f2:4c:c6:88:73:ec in /var/db/dhcpd_leases ...
	I0719 12:00:32.997496    4831 main.go:141] libmachine: (multinode-871000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0719 12:00:32.997527    4831 main.go:141] libmachine: (multinode-871000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:5e:a3:f5:89:e4:9e ID:1,5e:a3:f5:89:e4:9e Lease:0x669ab7be}
	I0719 12:00:32.997541    4831 main.go:141] libmachine: (multinode-871000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:36:3f:5c:47:18:4c ID:1,36:3f:5c:47:18:4c Lease:0x669c0844}
	I0719 12:00:32.997551    4831 main.go:141] libmachine: (multinode-871000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:82:41:5c:70:34:46 ID:1,82:41:5c:70:34:46 Lease:0x669c0833}
	I0719 12:00:32.997564    4831 main.go:141] libmachine: (multinode-871000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:4c:c6:88:73:ec ID:1,f2:4c:c6:88:73:ec Lease:0x669c07f3}
	I0719 12:00:32.997575    4831 main.go:141] libmachine: (multinode-871000) DBG | Found match: f2:4c:c6:88:73:ec
	I0719 12:00:32.997597    4831 main.go:141] libmachine: (multinode-871000) DBG | IP: 192.169.0.16
	I0719 12:00:32.997640    4831 main.go:141] libmachine: (multinode-871000) Calling .GetConfigRaw
	I0719 12:00:32.998375    4831 main.go:141] libmachine: (multinode-871000) Calling .GetIP
	I0719 12:00:32.998583    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:00:32.999156    4831 machine.go:94] provisionDockerMachine start ...
	I0719 12:00:32.999170    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:32.999304    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:32.999433    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:32.999560    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:32.999695    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:32.999811    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:32.999943    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:00:33.000178    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0719 12:00:33.000187    4831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 12:00:33.003121    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0719 12:00:33.056538    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0719 12:00:33.057630    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0719 12:00:33.057646    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0719 12:00:33.057655    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0719 12:00:33.057661    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0719 12:00:33.434726    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0719 12:00:33.434742    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0719 12:00:33.549270    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0719 12:00:33.549284    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0719 12:00:33.549315    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0719 12:00:33.549350    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0719 12:00:33.550217    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0719 12:00:33.550231    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0719 12:00:38.801284    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:38 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0719 12:00:38.801336    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:38 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0719 12:00:38.801347    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:38 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0719 12:00:38.825961    4831 main.go:141] libmachine: (multinode-871000) DBG | 2024/07/19 12:00:38 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0719 12:00:44.069201    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 12:00:44.069216    4831 main.go:141] libmachine: (multinode-871000) Calling .GetMachineName
	I0719 12:00:44.069367    4831 buildroot.go:166] provisioning hostname "multinode-871000"
	I0719 12:00:44.069379    4831 main.go:141] libmachine: (multinode-871000) Calling .GetMachineName
	I0719 12:00:44.069499    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:44.069604    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:44.069698    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.069853    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.069950    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:44.070077    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:00:44.070222    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0719 12:00:44.070231    4831 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-871000 && echo "multinode-871000" | sudo tee /etc/hostname
	I0719 12:00:44.141472    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-871000
	
	I0719 12:00:44.141490    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:44.141615    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:44.141731    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.141817    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.141903    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:44.142025    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:00:44.142169    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0719 12:00:44.142180    4831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-871000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-871000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-871000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 12:00:44.211399    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 12:00:44.211422    4831 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19307-1053/.minikube CaCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19307-1053/.minikube}
	I0719 12:00:44.211437    4831 buildroot.go:174] setting up certificates
	I0719 12:00:44.211452    4831 provision.go:84] configureAuth start
	I0719 12:00:44.211466    4831 main.go:141] libmachine: (multinode-871000) Calling .GetMachineName
	I0719 12:00:44.211600    4831 main.go:141] libmachine: (multinode-871000) Calling .GetIP
	I0719 12:00:44.211700    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:44.211795    4831 provision.go:143] copyHostCerts
	I0719 12:00:44.211827    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem
	I0719 12:00:44.211901    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem, removing ...
	I0719 12:00:44.211908    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem
	I0719 12:00:44.212041    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem (1078 bytes)
	I0719 12:00:44.212239    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem
	I0719 12:00:44.212281    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem, removing ...
	I0719 12:00:44.212286    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem
	I0719 12:00:44.212365    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem (1123 bytes)
	I0719 12:00:44.212531    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem
	I0719 12:00:44.212571    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem, removing ...
	I0719 12:00:44.212576    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem
	I0719 12:00:44.212657    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem (1675 bytes)
	I0719 12:00:44.212798    4831 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem org=jenkins.multinode-871000 san=[127.0.0.1 192.169.0.16 localhost minikube multinode-871000]
	I0719 12:00:44.439259    4831 provision.go:177] copyRemoteCerts
	I0719 12:00:44.439310    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 12:00:44.439346    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:44.439552    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:44.439711    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.439856    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:44.439954    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/id_rsa Username:docker}
	I0719 12:00:44.479237    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 12:00:44.479307    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 12:00:44.499560    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 12:00:44.499626    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 12:00:44.520339    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 12:00:44.520397    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 12:00:44.539283    4831 provision.go:87] duration metric: took 327.817751ms to configureAuth
	I0719 12:00:44.539295    4831 buildroot.go:189] setting minikube options for container-runtime
	I0719 12:00:44.539471    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:00:44.539484    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:44.539631    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:44.539733    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:44.539816    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.539911    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.539992    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:44.540102    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:00:44.540227    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0719 12:00:44.540235    4831 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 12:00:44.604508    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 12:00:44.604520    4831 buildroot.go:70] root file system type: tmpfs
	I0719 12:00:44.604598    4831 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 12:00:44.604611    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:44.604749    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:44.604839    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.604930    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.605024    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:44.605164    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:00:44.605321    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0719 12:00:44.605367    4831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 12:00:44.678347    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 12:00:44.678367    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:44.678528    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:44.678629    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.678719    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:44.678800    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:44.678932    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:00:44.679072    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0719 12:00:44.679085    4831 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 12:00:46.310178    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 12:00:46.310192    4831 machine.go:97] duration metric: took 13.311069223s to provisionDockerMachine
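Two things worth noting in the exchange above. First, the `%!s(MISSING)` in the logged command is Go's fmt package complaining that a `%s` verb had no operand while the command string was being logged; the command actually sent uses a plain `%s`. Second, the update is done with a "write `.new`, swap only if changed" idiom: `diff` exits non-zero when the files differ (or the target is missing, as here), so the `mv`/`daemon-reload`/`restart` branch runs only when an update is needed. A sketch of that idiom, using a temp directory instead of `/lib/systemd/system` so no `sudo` is involved:

```shell
# Sketch of the diff-then-swap unit update from the log.
DIR=$(mktemp -d)
UNIT="$DIR/docker.service"
printf '[Unit]\nDescription=Docker Application Container Engine\n' > "$UNIT.new"
# diff exits non-zero if the files differ or the target does not exist yet,
# so the replacement branch only fires when the unit actually changed
if ! diff -u "$UNIT" "$UNIT.new" 2>/dev/null; then
  mv "$UNIT.new" "$UNIT"
  echo "unit updated"   # the real command then runs daemon-reload/enable/restart
fi
```

In the log the target did not exist yet (`diff: can't stat ... docker.service`), so the branch fired and systemd enabled the new unit.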
	I0719 12:00:46.310205    4831 start.go:293] postStartSetup for "multinode-871000" (driver="hyperkit")
	I0719 12:00:46.310213    4831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 12:00:46.310226    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:46.310428    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 12:00:46.310443    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:46.310533    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:46.310628    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:46.310726    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:46.310830    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/id_rsa Username:docker}
	I0719 12:00:46.347950    4831 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 12:00:46.350937    4831 command_runner.go:130] > NAME=Buildroot
	I0719 12:00:46.350945    4831 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 12:00:46.350949    4831 command_runner.go:130] > ID=buildroot
	I0719 12:00:46.350953    4831 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 12:00:46.350957    4831 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 12:00:46.351059    4831 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 12:00:46.351070    4831 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1053/.minikube/addons for local assets ...
	I0719 12:00:46.351163    4831 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1053/.minikube/files for local assets ...
	I0719 12:00:46.351361    4831 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> 15922.pem in /etc/ssl/certs
	I0719 12:00:46.351367    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> /etc/ssl/certs/15922.pem
	I0719 12:00:46.351573    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 12:00:46.359513    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem --> /etc/ssl/certs/15922.pem (1708 bytes)
	I0719 12:00:46.378368    4831 start.go:296] duration metric: took 68.150448ms for postStartSetup
	I0719 12:00:46.378390    4831 fix.go:56] duration metric: took 13.571877481s for fixHost
	I0719 12:00:46.378414    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:46.378543    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:46.378630    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:46.378721    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:46.378806    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:46.378925    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:00:46.379066    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.16 22 <nil> <nil>}
	I0719 12:00:46.379074    4831 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 12:00:46.440347    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721415646.621837239
	
	I0719 12:00:46.440359    4831 fix.go:216] guest clock: 1721415646.621837239
	I0719 12:00:46.440364    4831 fix.go:229] Guest: 2024-07-19 12:00:46.621837239 -0700 PDT Remote: 2024-07-19 12:00:46.378392 -0700 PDT m=+14.013022435 (delta=243.445239ms)
	I0719 12:00:46.440383    4831 fix.go:200] guest clock delta is within tolerance: 243.445239ms
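The guest-clock check above runs `date +%s.%N` inside the VM and compares it against the host's wall clock, resyncing only when the delta exceeds a tolerance. A sketch of the comparison using the two timestamps from this log (awk is used for the float arithmetic; the 1-second tolerance is an assumption, not minikube's exact threshold):

```shell
# Sketch of the guest/host clock-delta check, with values from the log.
GUEST=1721415646.621837239   # VM's `date +%s.%N`, echoed back over SSH
HOST=1721415646.378392       # host-side timestamp ("Remote" in the log)
DELTA=$(awk -v g="$GUEST" -v h="$HOST" 'BEGIN{printf "%.6f", g-h}')
# assumed 1s tolerance for this sketch
WITHIN=$(awk -v d="$DELTA" 'BEGIN{print (d < 1 && d > -1) ? 1 : 0}')
echo "delta=${DELTA}s within=$WITHIN"
```

Here the delta is roughly 0.243s, so the log reports it as within tolerance and no resync happens.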
	I0719 12:00:46.440386    4831 start.go:83] releasing machines lock for "multinode-871000", held for 13.633904801s
	I0719 12:00:46.440405    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:46.440536    4831 main.go:141] libmachine: (multinode-871000) Calling .GetIP
	I0719 12:00:46.440638    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:46.440941    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:46.441055    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:00:46.441135    4831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 12:00:46.441166    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:46.441190    4831 ssh_runner.go:195] Run: cat /version.json
	I0719 12:00:46.441201    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:00:46.441316    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:46.441330    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:00:46.441411    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:46.441438    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:00:46.441503    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:46.441561    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:00:46.441589    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/id_rsa Username:docker}
	I0719 12:00:46.441646    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/id_rsa Username:docker}
	I0719 12:00:46.475255    4831 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0719 12:00:46.475472    4831 ssh_runner.go:195] Run: systemctl --version
	I0719 12:00:46.523829    4831 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0719 12:00:46.524847    4831 command_runner.go:130] > systemd 252 (252)
	I0719 12:00:46.524884    4831 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 12:00:46.525018    4831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 12:00:46.530028    4831 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 12:00:46.530049    4831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 12:00:46.530083    4831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 12:00:46.542690    4831 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0719 12:00:46.542713    4831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 12:00:46.542722    4831 start.go:495] detecting cgroup driver to use...
	I0719 12:00:46.542816    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 12:00:46.557179    4831 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 12:00:46.557498    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 12:00:46.565823    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 12:00:46.573971    4831 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 12:00:46.574016    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 12:00:46.582254    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 12:00:46.594975    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 12:00:46.608869    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 12:00:46.621959    4831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 12:00:46.634841    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 12:00:46.646924    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 12:00:46.656013    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
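The run of `sed -i -r` commands above edits `/etc/containerd/config.toml` in place: pinning the sandbox (pause) image and forcing the cgroupfs driver by flipping `SystemdCgroup` to `false`, while preserving each line's indentation via the `( *)` capture group. A sketch against a throwaway copy of the file (the TOML snippet below is a stand-in, not the VM's full config):

```shell
# Sketch of the indentation-preserving sed edits from the log.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  SystemdCgroup = true
EOF
# pin the pause image and select the cgroupfs driver, as in the log
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$CFG"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
cat "$CFG"
```

The same approach is used for the remaining edits in the log (runtime type, `conf_dir`, `enable_unprivileged_ports`), so they stay safe to re-run on every provisioning pass.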
	I0719 12:00:46.664861    4831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 12:00:46.672750    4831 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 12:00:46.672905    4831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 12:00:46.680831    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:00:46.777522    4831 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 12:00:46.796367    4831 start.go:495] detecting cgroup driver to use...
	I0719 12:00:46.796441    4831 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 12:00:46.816014    4831 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 12:00:46.816025    4831 command_runner.go:130] > [Unit]
	I0719 12:00:46.816032    4831 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 12:00:46.816036    4831 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 12:00:46.816041    4831 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 12:00:46.816045    4831 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 12:00:46.816052    4831 command_runner.go:130] > StartLimitBurst=3
	I0719 12:00:46.816057    4831 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 12:00:46.816063    4831 command_runner.go:130] > [Service]
	I0719 12:00:46.816068    4831 command_runner.go:130] > Type=notify
	I0719 12:00:46.816074    4831 command_runner.go:130] > Restart=on-failure
	I0719 12:00:46.816081    4831 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 12:00:46.816088    4831 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 12:00:46.816099    4831 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 12:00:46.816107    4831 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 12:00:46.816120    4831 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 12:00:46.816126    4831 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 12:00:46.816134    4831 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 12:00:46.816143    4831 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 12:00:46.816150    4831 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 12:00:46.816155    4831 command_runner.go:130] > ExecStart=
	I0719 12:00:46.816166    4831 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0719 12:00:46.816171    4831 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 12:00:46.816178    4831 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 12:00:46.816183    4831 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 12:00:46.816187    4831 command_runner.go:130] > LimitNOFILE=infinity
	I0719 12:00:46.816191    4831 command_runner.go:130] > LimitNPROC=infinity
	I0719 12:00:46.816194    4831 command_runner.go:130] > LimitCORE=infinity
	I0719 12:00:46.816199    4831 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 12:00:46.816204    4831 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 12:00:46.816207    4831 command_runner.go:130] > TasksMax=infinity
	I0719 12:00:46.816210    4831 command_runner.go:130] > TimeoutStartSec=0
	I0719 12:00:46.816215    4831 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 12:00:46.816220    4831 command_runner.go:130] > Delegate=yes
	I0719 12:00:46.816225    4831 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 12:00:46.816228    4831 command_runner.go:130] > KillMode=process
	I0719 12:00:46.816232    4831 command_runner.go:130] > [Install]
	I0719 12:00:46.816241    4831 command_runner.go:130] > WantedBy=multi-user.target
	I0719 12:00:46.816301    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 12:00:46.828017    4831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 12:00:46.841820    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 12:00:46.854311    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 12:00:46.865403    4831 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 12:00:46.885334    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 12:00:46.896643    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 12:00:46.911154    4831 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
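Note the endpoint switch: earlier in the log `crictl.yaml` pointed at the containerd socket while the cgroup driver was being detected; once Docker (via cri-dockerd) is chosen as the runtime, the file is rewritten to the cri-dockerd socket. A sketch of the write itself, with a temp file standing in for `/etc/crictl.yaml`:

```shell
# Sketch of the crictl.yaml rewrite: point crictl at the cri-dockerd socket.
CRICTL=$(mktemp)
printf '%s' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock
' | tee "$CRICTL"
```

The `sudo tee` in the real command exists so that only the file write, not the `printf`, needs elevated privileges.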
	I0719 12:00:46.911543    4831 ssh_runner.go:195] Run: which cri-dockerd
	I0719 12:00:46.914439    4831 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 12:00:46.914592    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 12:00:46.922635    4831 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 12:00:46.935842    4831 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 12:00:47.032258    4831 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 12:00:47.146507    4831 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 12:00:47.146582    4831 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 12:00:47.160491    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:00:47.256476    4831 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 12:00:49.580336    4831 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323849451s)
	I0719 12:00:49.580400    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 12:00:49.591628    4831 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0719 12:00:49.604680    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 12:00:49.615365    4831 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 12:00:49.709475    4831 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 12:00:49.817392    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:00:49.913248    4831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 12:00:49.926239    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 12:00:49.937484    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:00:50.040074    4831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 12:00:50.095245    4831 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 12:00:50.095324    4831 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 12:00:50.099885    4831 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0719 12:00:50.099906    4831 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 12:00:50.099912    4831 command_runner.go:130] > Device: 0,22	Inode: 741         Links: 1
	I0719 12:00:50.099917    4831 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0719 12:00:50.099920    4831 command_runner.go:130] > Access: 2024-07-19 19:00:50.234837118 +0000
	I0719 12:00:50.099925    4831 command_runner.go:130] > Modify: 2024-07-19 19:00:50.234837118 +0000
	I0719 12:00:50.099929    4831 command_runner.go:130] > Change: 2024-07-19 19:00:50.236836876 +0000
	I0719 12:00:50.099932    4831 command_runner.go:130] >  Birth: -
	I0719 12:00:50.100165    4831 start.go:563] Will wait 60s for crictl version
	I0719 12:00:50.100214    4831 ssh_runner.go:195] Run: which crictl
	I0719 12:00:50.103172    4831 command_runner.go:130] > /usr/bin/crictl
	I0719 12:00:50.103490    4831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 12:00:50.128144    4831 command_runner.go:130] > Version:  0.1.0
	I0719 12:00:50.128173    4831 command_runner.go:130] > RuntimeName:  docker
	I0719 12:00:50.128231    4831 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0719 12:00:50.128306    4831 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 12:00:50.129486    4831 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 12:00:50.129557    4831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 12:00:50.146155    4831 command_runner.go:130] > 27.0.3
	I0719 12:00:50.147029    4831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 12:00:50.163310    4831 command_runner.go:130] > 27.0.3
	I0719 12:00:50.209993    4831 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 12:00:50.210023    4831 main.go:141] libmachine: (multinode-871000) Calling .GetIP
	I0719 12:00:50.210225    4831 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0719 12:00:50.213504    4831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 12:00:50.223880    4831 kubeadm.go:883] updating cluster {Name:multinode-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.19 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 12:00:50.223981    4831 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:00:50.224031    4831 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 12:00:50.236070    4831 command_runner.go:130] > kindest/kindnetd:v20240719-e7903573
	I0719 12:00:50.236083    4831 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0719 12:00:50.236087    4831 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0719 12:00:50.236092    4831 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0719 12:00:50.236095    4831 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0719 12:00:50.236099    4831 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0719 12:00:50.236109    4831 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0719 12:00:50.236113    4831 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0719 12:00:50.236117    4831 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 12:00:50.236121    4831 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0719 12:00:50.237109    4831 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0719 12:00:50.237118    4831 docker.go:615] Images already preloaded, skipping extraction
	I0719 12:00:50.237183    4831 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 12:00:50.249337    4831 command_runner.go:130] > kindest/kindnetd:v20240719-e7903573
	I0719 12:00:50.249350    4831 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0719 12:00:50.249354    4831 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0719 12:00:50.249358    4831 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0719 12:00:50.249362    4831 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0719 12:00:50.249374    4831 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0719 12:00:50.249379    4831 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0719 12:00:50.249383    4831 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0719 12:00:50.249387    4831 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 12:00:50.249391    4831 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0719 12:00:50.250275    4831 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0719 12:00:50.250291    4831 cache_images.go:84] Images are preloaded, skipping loading
	I0719 12:00:50.250300    4831 kubeadm.go:934] updating node { 192.169.0.16 8443 v1.30.3 docker true true} ...
	I0719 12:00:50.250379    4831 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-871000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 12:00:50.250450    4831 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 12:00:50.267413    4831 command_runner.go:130] > cgroupfs
	I0719 12:00:50.268142    4831 cni.go:84] Creating CNI manager for ""
	I0719 12:00:50.268153    4831 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 12:00:50.268162    4831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 12:00:50.268187    4831 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.16 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-871000 NodeName:multinode-871000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 12:00:50.268266    4831 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-871000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 12:00:50.268328    4831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 12:00:50.276647    4831 command_runner.go:130] > kubeadm
	I0719 12:00:50.276655    4831 command_runner.go:130] > kubectl
	I0719 12:00:50.276658    4831 command_runner.go:130] > kubelet
	I0719 12:00:50.276767    4831 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 12:00:50.276810    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 12:00:50.284703    4831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0719 12:00:50.297809    4831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 12:00:50.311249    4831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0719 12:00:50.325231    4831 ssh_runner.go:195] Run: grep 192.169.0.16	control-plane.minikube.internal$ /etc/hosts
	I0719 12:00:50.328198    4831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 12:00:50.338427    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:00:50.433524    4831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 12:00:50.448598    4831 certs.go:68] Setting up /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000 for IP: 192.169.0.16
	I0719 12:00:50.448610    4831 certs.go:194] generating shared ca certs ...
	I0719 12:00:50.448620    4831 certs.go:226] acquiring lock for ca certs: {Name:mk78732514e475c67b8a22bdfb9da389d614aef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:00:50.448815    4831 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.key
	I0719 12:00:50.448890    4831 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.key
	I0719 12:00:50.448900    4831 certs.go:256] generating profile certs ...
	I0719 12:00:50.449015    4831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/client.key
	I0719 12:00:50.449096    4831 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/apiserver.key.70f33c4b
	I0719 12:00:50.449168    4831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/proxy-client.key
	I0719 12:00:50.449175    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 12:00:50.449197    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 12:00:50.449217    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 12:00:50.449237    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 12:00:50.449261    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 12:00:50.449294    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 12:00:50.449325    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 12:00:50.449344    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 12:00:50.449453    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592.pem (1338 bytes)
	W0719 12:00:50.449504    4831 certs.go:480] ignoring /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592_empty.pem, impossibly tiny 0 bytes
	I0719 12:00:50.449512    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 12:00:50.449558    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem (1078 bytes)
	I0719 12:00:50.449602    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem (1123 bytes)
	I0719 12:00:50.449649    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem (1675 bytes)
	I0719 12:00:50.449742    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem (1708 bytes)
	I0719 12:00:50.449787    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:00:50.449808    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592.pem -> /usr/share/ca-certificates/1592.pem
	I0719 12:00:50.449826    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> /usr/share/ca-certificates/15922.pem
	I0719 12:00:50.450284    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 12:00:50.486710    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 12:00:50.510026    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 12:00:50.533733    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 12:00:50.555585    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 12:00:50.581721    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 12:00:50.601477    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 12:00:50.621221    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 12:00:50.641584    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 12:00:50.661200    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592.pem --> /usr/share/ca-certificates/1592.pem (1338 bytes)
	I0719 12:00:50.681320    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem --> /usr/share/ca-certificates/15922.pem (1708 bytes)
	I0719 12:00:50.701028    4831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 12:00:50.714328    4831 ssh_runner.go:195] Run: openssl version
	I0719 12:00:50.718364    4831 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0719 12:00:50.718501    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 12:00:50.726857    4831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:00:50.730156    4831 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 18:14 /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:00:50.730260    4831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:14 /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:00:50.730295    4831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:00:50.734347    4831 command_runner.go:130] > b5213941
	I0719 12:00:50.734466    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 12:00:50.742662    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1592.pem && ln -fs /usr/share/ca-certificates/1592.pem /etc/ssl/certs/1592.pem"
	I0719 12:00:50.750882    4831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1592.pem
	I0719 12:00:50.754122    4831 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 18:22 /usr/share/ca-certificates/1592.pem
	I0719 12:00:50.754254    4831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:22 /usr/share/ca-certificates/1592.pem
	I0719 12:00:50.754291    4831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1592.pem
	I0719 12:00:50.758486    4831 command_runner.go:130] > 51391683
	I0719 12:00:50.758538    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1592.pem /etc/ssl/certs/51391683.0"
	I0719 12:00:50.766824    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15922.pem && ln -fs /usr/share/ca-certificates/15922.pem /etc/ssl/certs/15922.pem"
	I0719 12:00:50.775119    4831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15922.pem
	I0719 12:00:50.778582    4831 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 18:22 /usr/share/ca-certificates/15922.pem
	I0719 12:00:50.778593    4831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:22 /usr/share/ca-certificates/15922.pem
	I0719 12:00:50.778630    4831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15922.pem
	I0719 12:00:50.782894    4831 command_runner.go:130] > 3ec20f2e
	I0719 12:00:50.783005    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15922.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 12:00:50.791555    4831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 12:00:50.795062    4831 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 12:00:50.795072    4831 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0719 12:00:50.795077    4831 command_runner.go:130] > Device: 253,1	Inode: 531528      Links: 1
	I0719 12:00:50.795082    4831 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 12:00:50.795090    4831 command_runner.go:130] > Access: 2024-07-19 18:54:57.287531357 +0000
	I0719 12:00:50.795095    4831 command_runner.go:130] > Modify: 2024-07-19 18:54:57.287531357 +0000
	I0719 12:00:50.795106    4831 command_runner.go:130] > Change: 2024-07-19 18:54:57.287531357 +0000
	I0719 12:00:50.795111    4831 command_runner.go:130] >  Birth: 2024-07-19 18:54:57.287531357 +0000
	I0719 12:00:50.795154    4831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 12:00:50.799586    4831 command_runner.go:130] > Certificate will not expire
	I0719 12:00:50.799648    4831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 12:00:50.804014    4831 command_runner.go:130] > Certificate will not expire
	I0719 12:00:50.804063    4831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 12:00:50.808385    4831 command_runner.go:130] > Certificate will not expire
	I0719 12:00:50.808509    4831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 12:00:50.812809    4831 command_runner.go:130] > Certificate will not expire
	I0719 12:00:50.812882    4831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 12:00:50.817105    4831 command_runner.go:130] > Certificate will not expire
	I0719 12:00:50.817155    4831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 12:00:50.821468    4831 command_runner.go:130] > Certificate will not expire
	I0719 12:00:50.821558    4831 kubeadm.go:392] StartCluster: {Name:multinode-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:multinode-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.19 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns
:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:00:50.821673    4831 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 12:00:50.833650    4831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 12:00:50.841330    4831 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0719 12:00:50.841339    4831 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0719 12:00:50.841344    4831 command_runner.go:130] > /var/lib/minikube/etcd:
	I0719 12:00:50.841363    4831 command_runner.go:130] > member
	I0719 12:00:50.841375    4831 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 12:00:50.841386    4831 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 12:00:50.841422    4831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 12:00:50.848761    4831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 12:00:50.849095    4831 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-871000" does not appear in /Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 12:00:50.849177    4831 kubeconfig.go:62] /Users/jenkins/minikube-integration/19307-1053/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-871000" cluster setting kubeconfig missing "multinode-871000" context setting]
	I0719 12:00:50.849405    4831 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1053/kubeconfig: {Name:mk7cfae7eb77889432abd85178928820b2e794ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:00:50.850051    4831 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 12:00:50.850266    4831 kapi.go:59] client config for multinode-871000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/client.key", CAFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xebf8ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 12:00:50.850580    4831 cert_rotation.go:137] Starting client certificate rotation controller
	I0719 12:00:50.850753    4831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 12:00:50.857853    4831 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.16
	I0719 12:00:50.857870    4831 kubeadm.go:1160] stopping kube-system containers ...
	I0719 12:00:50.857927    4831 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 12:00:50.871812    4831 command_runner.go:130] > 5a07c503ef10
	I0719 12:00:50.871825    4831 command_runner.go:130] > 1a451af36360
	I0719 12:00:50.871829    4831 command_runner.go:130] > 6ddb80b3c9e9
	I0719 12:00:50.871834    4831 command_runner.go:130] > c0dd65646579
	I0719 12:00:50.871851    4831 command_runner.go:130] > 9fb6361ebde6
	I0719 12:00:50.871855    4831 command_runner.go:130] > a2327b8c83c0
	I0719 12:00:50.871858    4831 command_runner.go:130] > 492c042de032
	I0719 12:00:50.871861    4831 command_runner.go:130] > 587cdaf6e20c
	I0719 12:00:50.871865    4831 command_runner.go:130] > a094a5e71d55
	I0719 12:00:50.871868    4831 command_runner.go:130] > a69e88441e03
	I0719 12:00:50.871874    4831 command_runner.go:130] > e5a9045d5578
	I0719 12:00:50.871878    4831 command_runner.go:130] > 72d515f79956
	I0719 12:00:50.871881    4831 command_runner.go:130] > ae60ee8266a7
	I0719 12:00:50.871884    4831 command_runner.go:130] > ce0d6620b5f9
	I0719 12:00:50.871891    4831 command_runner.go:130] > 2fb0e3bd3145
	I0719 12:00:50.871895    4831 command_runner.go:130] > 48bd43fcf8d2
	I0719 12:00:50.872623    4831 docker.go:483] Stopping containers: [5a07c503ef10 1a451af36360 6ddb80b3c9e9 c0dd65646579 9fb6361ebde6 a2327b8c83c0 492c042de032 587cdaf6e20c a094a5e71d55 a69e88441e03 e5a9045d5578 72d515f79956 ae60ee8266a7 ce0d6620b5f9 2fb0e3bd3145 48bd43fcf8d2]
	I0719 12:00:50.872690    4831 ssh_runner.go:195] Run: docker stop 5a07c503ef10 1a451af36360 6ddb80b3c9e9 c0dd65646579 9fb6361ebde6 a2327b8c83c0 492c042de032 587cdaf6e20c a094a5e71d55 a69e88441e03 e5a9045d5578 72d515f79956 ae60ee8266a7 ce0d6620b5f9 2fb0e3bd3145 48bd43fcf8d2
	I0719 12:00:50.884270    4831 command_runner.go:130] > 5a07c503ef10
	I0719 12:00:50.885737    4831 command_runner.go:130] > 1a451af36360
	I0719 12:00:50.885748    4831 command_runner.go:130] > 6ddb80b3c9e9
	I0719 12:00:50.885752    4831 command_runner.go:130] > c0dd65646579
	I0719 12:00:50.885756    4831 command_runner.go:130] > 9fb6361ebde6
	I0719 12:00:50.885759    4831 command_runner.go:130] > a2327b8c83c0
	I0719 12:00:50.885764    4831 command_runner.go:130] > 492c042de032
	I0719 12:00:50.886041    4831 command_runner.go:130] > 587cdaf6e20c
	I0719 12:00:50.886104    4831 command_runner.go:130] > a094a5e71d55
	I0719 12:00:50.886154    4831 command_runner.go:130] > a69e88441e03
	I0719 12:00:50.886163    4831 command_runner.go:130] > e5a9045d5578
	I0719 12:00:50.886167    4831 command_runner.go:130] > 72d515f79956
	I0719 12:00:50.886170    4831 command_runner.go:130] > ae60ee8266a7
	I0719 12:00:50.886458    4831 command_runner.go:130] > ce0d6620b5f9
	I0719 12:00:50.886466    4831 command_runner.go:130] > 2fb0e3bd3145
	I0719 12:00:50.886471    4831 command_runner.go:130] > 48bd43fcf8d2
	I0719 12:00:50.887418    4831 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 12:00:50.899484    4831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 12:00:50.906848    4831 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0719 12:00:50.906859    4831 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0719 12:00:50.906864    4831 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0719 12:00:50.906870    4831 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 12:00:50.907008    4831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 12:00:50.907023    4831 kubeadm.go:157] found existing configuration files:
	
	I0719 12:00:50.907067    4831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 12:00:50.914102    4831 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 12:00:50.914121    4831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 12:00:50.914163    4831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 12:00:50.921369    4831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 12:00:50.928626    4831 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 12:00:50.928646    4831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 12:00:50.928691    4831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 12:00:50.936009    4831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 12:00:50.942964    4831 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 12:00:50.942985    4831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 12:00:50.943022    4831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 12:00:50.950263    4831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 12:00:50.957315    4831 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 12:00:50.957328    4831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 12:00:50.957364    4831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 12:00:50.964862    4831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 12:00:50.972390    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 12:00:51.035216    4831 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 12:00:51.035258    4831 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0719 12:00:51.035483    4831 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0719 12:00:51.035573    4831 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 12:00:51.035813    4831 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0719 12:00:51.035962    4831 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0719 12:00:51.036249    4831 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0719 12:00:51.036392    4831 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0719 12:00:51.036567    4831 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0719 12:00:51.036704    4831 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 12:00:51.036845    4831 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 12:00:51.037805    4831 command_runner.go:130] > [certs] Using the existing "sa" key
	I0719 12:00:51.037917    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 12:00:51.076683    4831 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 12:00:51.282204    4831 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 12:00:51.377771    4831 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 12:00:51.638949    4831 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 12:00:51.795924    4831 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 12:00:51.912126    4831 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 12:00:51.913978    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 12:00:51.962857    4831 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 12:00:51.964163    4831 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 12:00:51.964173    4831 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0719 12:00:52.077102    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 12:00:52.122974    4831 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 12:00:52.122990    4831 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 12:00:52.129548    4831 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 12:00:52.132019    4831 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 12:00:52.136000    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 12:00:52.209857    4831 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 12:00:52.217061    4831 api_server.go:52] waiting for apiserver process to appear ...
	I0719 12:00:52.217135    4831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 12:00:52.717207    4831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 12:00:53.217616    4831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 12:00:53.230644    4831 command_runner.go:130] > 1608
	I0719 12:00:53.230846    4831 api_server.go:72] duration metric: took 1.013796396s to wait for apiserver process to appear ...
	I0719 12:00:53.230858    4831 api_server.go:88] waiting for apiserver healthz status ...
	I0719 12:00:53.230876    4831 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0719 12:00:55.238758    4831 api_server.go:279] https://192.169.0.16:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 12:00:55.238773    4831 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 12:00:55.238784    4831 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0719 12:00:55.279295    4831 api_server.go:279] https://192.169.0.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 12:00:55.279319    4831 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 12:00:55.730933    4831 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0719 12:00:55.735667    4831 api_server.go:279] https://192.169.0.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 12:00:55.735678    4831 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 12:00:56.231298    4831 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0719 12:00:56.235031    4831 api_server.go:279] https://192.169.0.16:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 12:00:56.235046    4831 api_server.go:103] status: https://192.169.0.16:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 12:00:56.732459    4831 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0719 12:00:56.736437    4831 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0719 12:00:56.736498    4831 round_trippers.go:463] GET https://192.169.0.16:8443/version
	I0719 12:00:56.736504    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:56.736512    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:56.736517    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:56.741567    4831 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 12:00:56.741579    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:56.741585    4831 round_trippers.go:580]     Audit-Id: 775c2944-6cec-4689-817e-4a722972a289
	I0719 12:00:56.741588    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:56.741591    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:56.741594    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:56.741597    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:56.741600    4831 round_trippers.go:580]     Content-Length: 263
	I0719 12:00:56.741603    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:56 GMT
	I0719 12:00:56.741623    4831 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0719 12:00:56.741668    4831 api_server.go:141] control plane version: v1.30.3
	I0719 12:00:56.741679    4831 api_server.go:131] duration metric: took 3.510827315s to wait for apiserver health ...
	I0719 12:00:56.741684    4831 cni.go:84] Creating CNI manager for ""
	I0719 12:00:56.741688    4831 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 12:00:56.781141    4831 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 12:00:56.817916    4831 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 12:00:56.823787    4831 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0719 12:00:56.823802    4831 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0719 12:00:56.823807    4831 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0719 12:00:56.823812    4831 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 12:00:56.823817    4831 command_runner.go:130] > Access: 2024-07-19 19:00:42.772281672 +0000
	I0719 12:00:56.823821    4831 command_runner.go:130] > Modify: 2024-07-18 23:04:21.000000000 +0000
	I0719 12:00:56.823826    4831 command_runner.go:130] > Change: 2024-07-19 19:00:40.582734066 +0000
	I0719 12:00:56.823829    4831 command_runner.go:130] >  Birth: -
	I0719 12:00:56.823866    4831 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 12:00:56.823872    4831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 12:00:56.857940    4831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 12:00:57.254573    4831 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0719 12:00:57.277003    4831 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0719 12:00:57.402285    4831 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0719 12:00:57.454938    4831 command_runner.go:130] > daemonset.apps/kindnet configured
	I0719 12:00:57.456362    4831 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 12:00:57.456442    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:00:57.456452    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.456458    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.456461    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.459746    4831 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 12:00:57.459754    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.459759    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.459763    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.459765    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.459768    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.459770    4831 round_trippers.go:580]     Audit-Id: 1e339931-92ed-4f97-b0ab-26c9e8a733e5
	I0719 12:00:57.459773    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.460649    4831 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"979"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87012 chars]
	I0719 12:00:57.463655    4831 system_pods.go:59] 12 kube-system pods found
	I0719 12:00:57.463673    4831 system_pods.go:61] "coredns-7db6d8ff4d-85r26" [c7d62ec5-693b-46ab-9437-86aef8b469e8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 12:00:57.463679    4831 system_pods.go:61] "etcd-multinode-871000" [8818ed52-4b2d-4629-af02-b835e3cfa034] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 12:00:57.463683    4831 system_pods.go:61] "kindnet-4stbd" [58fb2d63-07bb-4a27-87c5-4e259083f5be] Running
	I0719 12:00:57.463687    4831 system_pods.go:61] "kindnet-897rz" [a3c96d7b-9aa1-49e1-9fa6-8aad9551be4f] Running
	I0719 12:00:57.463690    4831 system_pods.go:61] "kindnet-hht5h" [f1a7b402-0cf3-469c-8124-6b53aa34f4c7] Running
	I0719 12:00:57.463694    4831 system_pods.go:61] "kube-apiserver-multinode-871000" [9f3fdf92-3cbd-411c-802e-cbbbe1b60d68] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 12:00:57.463698    4831 system_pods.go:61] "kube-controller-manager-multinode-871000" [74e143fb-26b8-4d1d-b07a-f1b2c590133f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 12:00:57.463706    4831 system_pods.go:61] "kube-proxy-86ssb" [37609942-98d8-4c6b-b339-53bf3a901e3f] Running
	I0719 12:00:57.463710    4831 system_pods.go:61] "kube-proxy-89hm2" [77b4b485-53f0-4480-8b62-a1df26f037b8] Running
	I0719 12:00:57.463713    4831 system_pods.go:61] "kube-proxy-t9bqq" [5ef191fc-6e2e-486c-b825-76c6e0d95416] Running
	I0719 12:00:57.463720    4831 system_pods.go:61] "kube-scheduler-multinode-871000" [0d73182a-0458-470e-ac06-ccde27fa113a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 12:00:57.463724    4831 system_pods.go:61] "storage-provisioner" [ccd0aaec-abf0-4aec-9ebf-14f619510aeb] Running
	I0719 12:00:57.463729    4831 system_pods.go:74] duration metric: took 7.359738ms to wait for pod list to return data ...
	I0719 12:00:57.463736    4831 node_conditions.go:102] verifying NodePressure condition ...
	I0719 12:00:57.463768    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0719 12:00:57.463773    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.463779    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.463783    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.465729    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:00:57.465743    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.465770    4831 round_trippers.go:580]     Audit-Id: bbb46d1f-7fb5-4c51-a18b-f479c702e9c5
	I0719 12:00:57.465796    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.465806    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.465811    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.465814    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.465816    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.465935    4831 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"979"},"items":[{"metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14802 chars]
	I0719 12:00:57.466445    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:00:57.466457    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:00:57.466466    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:00:57.466470    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:00:57.466474    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:00:57.466476    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:00:57.466482    4831 node_conditions.go:105] duration metric: took 2.740642ms to run NodePressure ...
	I0719 12:00:57.466491    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 12:00:57.559189    4831 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0719 12:00:57.714400    4831 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0719 12:00:57.715373    4831 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 12:00:57.715432    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0719 12:00:57.715437    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.715443    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.715446    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.717456    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:57.717464    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.717469    4831 round_trippers.go:580]     Audit-Id: f420da95-9f09-4d87-b8c4-3b267b4d6865
	I0719 12:00:57.717472    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.717474    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.717477    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.717480    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.717494    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.718031    4831 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"982"},"items":[{"metadata":{"name":"etcd-multinode-871000","namespace":"kube-system","uid":"8818ed52-4b2d-4629-af02-b835e3cfa034","resourceVersion":"952","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.mirror":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.seen":"2024-07-19T18:55:05.740545259Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30912 chars]
	I0719 12:00:57.718730    4831 kubeadm.go:739] kubelet initialised
	I0719 12:00:57.718739    4831 kubeadm.go:740] duration metric: took 3.356875ms waiting for restarted kubelet to initialise ...
	I0719 12:00:57.718747    4831 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 12:00:57.718778    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:00:57.718783    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.718788    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.718791    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.721109    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:57.721118    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.721123    4831 round_trippers.go:580]     Audit-Id: 7c174756-6655-4d34-8f82-e9921bf5bed0
	I0719 12:00:57.721127    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.721132    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.721136    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.721140    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.721144    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.721839    4831 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"982"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87012 chars]
	I0719 12:00:57.723644    4831 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:57.723690    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:00:57.723696    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.723702    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.723706    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.724990    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:00:57.724998    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.725004    4831 round_trippers.go:580]     Audit-Id: 1c96e36b-039e-436f-ac73-b69e67d82f1f
	I0719 12:00:57.725010    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.725014    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.725019    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.725022    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.725026    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.725179    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0719 12:00:57.725410    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:57.725417    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.725422    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.725427    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.726851    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:00:57.726861    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.726867    4831 round_trippers.go:580]     Audit-Id: 2628c7a8-ce9d-4c5b-b9d9-1338663469ee
	I0719 12:00:57.726872    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.726875    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.726879    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.726882    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.726884    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.726971    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:00:57.727158    4831 pod_ready.go:97] node "multinode-871000" hosting pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.727168    4831 pod_ready.go:81] duration metric: took 3.515447ms for pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace to be "Ready" ...
	E0719 12:00:57.727186    4831 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-871000" hosting pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.727195    4831 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:57.727223    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-871000
	I0719 12:00:57.727228    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.727233    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.727237    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.728361    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:00:57.728369    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.728374    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.728381    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.728386    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.728390    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.728394    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.728399    4831 round_trippers.go:580]     Audit-Id: d9c38a9e-6095-4979-a34c-4a3222140fc0
	I0719 12:00:57.728543    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-871000","namespace":"kube-system","uid":"8818ed52-4b2d-4629-af02-b835e3cfa034","resourceVersion":"952","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.mirror":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.seen":"2024-07-19T18:55:05.740545259Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0719 12:00:57.728746    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:57.728753    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.728759    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.728762    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.729634    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:00:57.729643    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.729651    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.729657    4831 round_trippers.go:580]     Audit-Id: f42a4100-c3e2-41ff-aeee-49c731be4038
	I0719 12:00:57.729660    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.729664    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.729669    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.729673    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.729840    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:00:57.730004    4831 pod_ready.go:97] node "multinode-871000" hosting pod "etcd-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.730015    4831 pod_ready.go:81] duration metric: took 2.812926ms for pod "etcd-multinode-871000" in "kube-system" namespace to be "Ready" ...
	E0719 12:00:57.730020    4831 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-871000" hosting pod "etcd-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.730029    4831 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:57.730063    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-871000
	I0719 12:00:57.730068    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.730073    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.730078    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.730934    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:00:57.730941    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.730945    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.730951    4831 round_trippers.go:580]     Audit-Id: 6efa9f85-27dd-430e-97e8-fb170a086f2f
	I0719 12:00:57.730955    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.730960    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.730965    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.730968    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.731160    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-871000","namespace":"kube-system","uid":"9f3fdf92-3cbd-411c-802e-cbbbe1b60d68","resourceVersion":"953","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"1acc565de321609aa117f6402dfd5ae5","kubernetes.io/config.mirror":"1acc565de321609aa117f6402dfd5ae5","kubernetes.io/config.seen":"2024-07-19T18:55:05.740548209Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8135 chars]
	I0719 12:00:57.731378    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:57.731384    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.731389    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.731392    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.732315    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:00:57.732323    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.732328    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.732331    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.732334    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.732336    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.732339    4831 round_trippers.go:580]     Audit-Id: 54fc7b1c-f6d5-4cd2-a2f9-7fb73f2ffe73
	I0719 12:00:57.732343    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.732413    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:00:57.732579    4831 pod_ready.go:97] node "multinode-871000" hosting pod "kube-apiserver-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.732588    4831 pod_ready.go:81] duration metric: took 2.55315ms for pod "kube-apiserver-multinode-871000" in "kube-system" namespace to be "Ready" ...
	E0719 12:00:57.732593    4831 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-871000" hosting pod "kube-apiserver-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.732598    4831 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:57.732625    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-871000
	I0719 12:00:57.732630    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.732635    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.732640    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.733553    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:00:57.733560    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.733565    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.733569    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.733571    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.733575    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.733578    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:57 GMT
	I0719 12:00:57.733580    4831 round_trippers.go:580]     Audit-Id: b13079da-9dc2-4160-a834-34a01e90bb5f
	I0719 12:00:57.733652    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-871000","namespace":"kube-system","uid":"74e143fb-26b8-4d1d-b07a-f1b2c590133f","resourceVersion":"950","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8f11f40ce051787c8d8ced4f83327f27","kubernetes.io/config.mirror":"8f11f40ce051787c8d8ced4f83327f27","kubernetes.io/config.seen":"2024-07-19T18:55:05.740548943Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7726 chars]
	I0719 12:00:57.856725    4831 request.go:629] Waited for 122.781464ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:57.856773    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:57.856784    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:57.856795    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:57.856804    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:57.859847    4831 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 12:00:57.859862    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:57.859869    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:57.859874    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:58 GMT
	I0719 12:00:57.859879    4831 round_trippers.go:580]     Audit-Id: 2cc1c1bd-ec94-4674-b418-6bc8427a19bb
	I0719 12:00:57.859883    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:57.859887    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:57.859890    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:57.860050    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:00:57.860338    4831 pod_ready.go:97] node "multinode-871000" hosting pod "kube-controller-manager-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.860353    4831 pod_ready.go:81] duration metric: took 127.748716ms for pod "kube-controller-manager-multinode-871000" in "kube-system" namespace to be "Ready" ...
	E0719 12:00:57.860361    4831 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-871000" hosting pod "kube-controller-manager-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:57.860379    4831 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-86ssb" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:58.057325    4831 request.go:629] Waited for 196.89606ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-86ssb
	I0719 12:00:58.057373    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-86ssb
	I0719 12:00:58.057380    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:58.057390    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:58.057398    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:58.059950    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:58.059960    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:58.059965    4831 round_trippers.go:580]     Audit-Id: f10eb547-809f-4c54-a6ec-b2288b02ab01
	I0719 12:00:58.059968    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:58.059971    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:58.059974    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:58.059976    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:58.059979    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:58 GMT
	I0719 12:00:58.060113    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-86ssb","generateName":"kube-proxy-","namespace":"kube-system","uid":"37609942-98d8-4c6b-b339-53bf3a901e3f","resourceVersion":"862","creationTimestamp":"2024-07-19T18:57:03Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0719 12:00:58.257183    4831 request.go:629] Waited for 196.701375ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m03
	I0719 12:00:58.257308    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m03
	I0719 12:00:58.257320    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:58.257331    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:58.257337    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:58.259593    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:58.259606    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:58.259612    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:58.259617    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:58 GMT
	I0719 12:00:58.259620    4831 round_trippers.go:580]     Audit-Id: 4dc0fb4d-0db3-4b8d-a313-9758d1995d8b
	I0719 12:00:58.259634    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:58.259638    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:58.259643    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:58.259946    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m03","uid":"4745805a-e01a-4411-b942-abcd092662c6","resourceVersion":"889","creationTimestamp":"2024-07-19T18:59:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T11_59_53_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:59:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3641 chars]
	I0719 12:00:58.260173    4831 pod_ready.go:92] pod "kube-proxy-86ssb" in "kube-system" namespace has status "Ready":"True"
	I0719 12:00:58.260185    4831 pod_ready.go:81] duration metric: took 399.799711ms for pod "kube-proxy-86ssb" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:58.260194    4831 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-89hm2" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:58.458045    4831 request.go:629] Waited for 197.804124ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89hm2
	I0719 12:00:58.458170    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89hm2
	I0719 12:00:58.458179    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:58.458191    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:58.458197    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:58.460538    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:58.460555    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:58.460564    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:58.460572    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:58.460578    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:58.460587    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:58 GMT
	I0719 12:00:58.460593    4831 round_trippers.go:580]     Audit-Id: 4c96c3a8-6e4e-4dba-8774-2c436d82589a
	I0719 12:00:58.460598    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:58.460767    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-89hm2","generateName":"kube-proxy-","namespace":"kube-system","uid":"77b4b485-53f0-4480-8b62-a1df26f037b8","resourceVersion":"979","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0719 12:00:58.658537    4831 request.go:629] Waited for 197.37511ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:58.658716    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:58.658727    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:58.658738    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:58.658744    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:58.661417    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:58.661434    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:58.661441    4831 round_trippers.go:580]     Audit-Id: 7126b032-d2be-4749-a3a5-c0204a3449bc
	I0719 12:00:58.661446    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:58.661466    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:58.661478    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:58.661482    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:58.661491    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:58 GMT
	I0719 12:00:58.661584    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:00:58.661831    4831 pod_ready.go:97] node "multinode-871000" hosting pod "kube-proxy-89hm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:58.661846    4831 pod_ready.go:81] duration metric: took 401.647379ms for pod "kube-proxy-89hm2" in "kube-system" namespace to be "Ready" ...
	E0719 12:00:58.661856    4831 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-871000" hosting pod "kube-proxy-89hm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:58.661872    4831 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t9bqq" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:58.856578    4831 request.go:629] Waited for 194.656885ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t9bqq
	I0719 12:00:58.856687    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t9bqq
	I0719 12:00:58.856697    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:58.856709    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:58.856717    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:58.859522    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:58.859539    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:58.859546    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:58.859552    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:58.859557    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:58.859561    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:58.859564    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:59 GMT
	I0719 12:00:58.859568    4831 round_trippers.go:580]     Audit-Id: d6a6906a-da0f-40ca-81be-6c8c66da5cb5
	I0719 12:00:58.859682    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t9bqq","generateName":"kube-proxy-","namespace":"kube-system","uid":"5ef191fc-6e2e-486c-b825-76c6e0d95416","resourceVersion":"523","creationTimestamp":"2024-07-19T18:56:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:56:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0719 12:00:59.058021    4831 request.go:629] Waited for 197.993839ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:00:59.058176    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:00:59.058187    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:59.058198    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:59.058206    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:59.060916    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:59.060937    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:59.060948    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:59.060954    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:59.060958    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:59 GMT
	I0719 12:00:59.060965    4831 round_trippers.go:580]     Audit-Id: b3383d03-869e-4dc0-865c-296bb6ac6bba
	I0719 12:00:59.060970    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:59.060976    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:59.061476    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"e0450b58-f42e-4eee-a22b-05f89b4b721d","resourceVersion":"589","creationTimestamp":"2024-07-19T18:56:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T11_56_14_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:56:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0719 12:00:59.061730    4831 pod_ready.go:92] pod "kube-proxy-t9bqq" in "kube-system" namespace has status "Ready":"True"
	I0719 12:00:59.061742    4831 pod_ready.go:81] duration metric: took 399.862582ms for pod "kube-proxy-t9bqq" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:59.061752    4831 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:00:59.258570    4831 request.go:629] Waited for 196.730856ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-871000
	I0719 12:00:59.258699    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-871000
	I0719 12:00:59.258709    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:59.258720    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:59.258725    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:59.261370    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:59.261382    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:59.261389    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:59 GMT
	I0719 12:00:59.261418    4831 round_trippers.go:580]     Audit-Id: 23659b5b-7026-4753-9b67-8bd41b92b47d
	I0719 12:00:59.261466    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:59.261482    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:59.261488    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:59.261494    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:59.261864    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-871000","namespace":"kube-system","uid":"0d73182a-0458-470e-ac06-ccde27fa113a","resourceVersion":"948","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b43ab2caff1f80690c8bfbb88ac08a85","kubernetes.io/config.mirror":"b43ab2caff1f80690c8bfbb88ac08a85","kubernetes.io/config.seen":"2024-07-19T18:55:00.040869314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5438 chars]
	I0719 12:00:59.456844    4831 request.go:629] Waited for 194.664848ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:59.456973    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:59.456982    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:59.456995    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:59.457004    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:59.459649    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:00:59.459664    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:59.459671    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:59.459675    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:59 GMT
	I0719 12:00:59.459679    4831 round_trippers.go:580]     Audit-Id: 5c5adbfc-9e7b-4172-b121-c2d1431e9d6d
	I0719 12:00:59.459682    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:59.459685    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:59.459688    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:59.459804    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:00:59.460084    4831 pod_ready.go:97] node "multinode-871000" hosting pod "kube-scheduler-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:59.460102    4831 pod_ready.go:81] duration metric: took 398.345213ms for pod "kube-scheduler-multinode-871000" in "kube-system" namespace to be "Ready" ...
	E0719 12:00:59.460111    4831 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-871000" hosting pod "kube-scheduler-multinode-871000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000" has status "Ready":"False"
	I0719 12:00:59.460120    4831 pod_ready.go:38] duration metric: took 1.74137125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 12:00:59.460137    4831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 12:00:59.470055    4831 command_runner.go:130] > -16
	I0719 12:00:59.470100    4831 ops.go:34] apiserver oom_adj: -16
	I0719 12:00:59.470108    4831 kubeadm.go:597] duration metric: took 8.628745324s to restartPrimaryControlPlane
	I0719 12:00:59.470115    4831 kubeadm.go:394] duration metric: took 8.648588625s to StartCluster
	I0719 12:00:59.470125    4831 settings.go:142] acquiring lock: {Name:mk32b18012e36d8300f16bafebdd450435b306a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:00:59.470229    4831 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 12:00:59.470588    4831 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1053/kubeconfig: {Name:mk7cfae7eb77889432abd85178928820b2e794ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:00:59.470958    4831 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:00:59.470985    4831 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 12:00:59.471122    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:00:59.494234    4831 out.go:177] * Verifying Kubernetes components...
	I0719 12:00:59.537235    4831 out.go:177] * Enabled addons: 
	I0719 12:00:59.558133    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:00:59.579095    4831 addons.go:510] duration metric: took 108.11473ms for enable addons: enabled=[]
	I0719 12:00:59.695256    4831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 12:00:59.706084    4831 node_ready.go:35] waiting up to 6m0s for node "multinode-871000" to be "Ready" ...
	I0719 12:00:59.706137    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:00:59.706143    4831 round_trippers.go:469] Request Headers:
	I0719 12:00:59.706149    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:00:59.706154    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:00:59.707867    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:00:59.707877    4831 round_trippers.go:577] Response Headers:
	I0719 12:00:59.707886    4831 round_trippers.go:580]     Audit-Id: 66c66110-fcff-40f5-8e0d-e068bc010762
	I0719 12:00:59.707889    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:00:59.707894    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:00:59.707896    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:00:59.707898    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:00:59.707901    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:00:59 GMT
	I0719 12:00:59.708039    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:00.206691    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:00.206718    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:00.206730    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:00.206736    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:00.208920    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:00.208933    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:00.208941    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:00 GMT
	I0719 12:01:00.208946    4831 round_trippers.go:580]     Audit-Id: 0a129778-1996-40bb-a48c-fa0ac4c803b1
	I0719 12:01:00.208949    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:00.208953    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:00.208956    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:00.208959    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:00.209110    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:00.706969    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:00.706993    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:00.707005    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:00.707011    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:00.709702    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:00.709718    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:00.709725    4831 round_trippers.go:580]     Audit-Id: 236326f2-e9d0-4a0e-b41f-807eb6b67134
	I0719 12:01:00.709729    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:00.709773    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:00.709781    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:00.709786    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:00.709790    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:00 GMT
	I0719 12:01:00.710141    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:01.206241    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:01.206252    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:01.206258    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:01.206261    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:01.208805    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:01.208818    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:01.208826    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:01.208832    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:01.208836    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:01 GMT
	I0719 12:01:01.208840    4831 round_trippers.go:580]     Audit-Id: 432acf5e-037c-436b-b152-26648b7bb65c
	I0719 12:01:01.208844    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:01.208847    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:01.209124    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:01.706530    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:01.706547    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:01.706556    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:01.706561    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:01.708554    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:01.708566    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:01.708573    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:01.708580    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:01.708584    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:01 GMT
	I0719 12:01:01.708587    4831 round_trippers.go:580]     Audit-Id: 0f065534-6fb4-4385-9ea9-66267e61e0d7
	I0719 12:01:01.708591    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:01.708594    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:01.708774    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:01.708958    4831 node_ready.go:53] node "multinode-871000" has status "Ready":"False"
	I0719 12:01:02.206706    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:02.206726    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:02.206737    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:02.206746    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:02.209339    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:02.209361    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:02.209374    4831 round_trippers.go:580]     Audit-Id: e65067d5-67e9-4674-9522-e48215ef9e7b
	I0719 12:01:02.209381    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:02.209390    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:02.209399    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:02.209408    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:02.209416    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:02 GMT
	I0719 12:01:02.209648    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:02.707234    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:02.707259    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:02.707268    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:02.707273    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:02.710126    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:02.710141    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:02.710149    4831 round_trippers.go:580]     Audit-Id: 3b8ad13c-1ba8-4ec4-8bdc-3c4a7a5b8576
	I0719 12:01:02.710153    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:02.710156    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:02.710159    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:02.710163    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:02.710167    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:02 GMT
	I0719 12:01:02.710634    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:03.206851    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:03.206865    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:03.206871    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:03.206874    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:03.209244    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:03.209255    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:03.209261    4831 round_trippers.go:580]     Audit-Id: 3d163c89-fc3b-4ad8-81fc-eddd33e8b795
	I0719 12:01:03.209264    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:03.209267    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:03.209269    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:03.209272    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:03.209275    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:03 GMT
	I0719 12:01:03.209364    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:03.706624    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:03.706642    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:03.706650    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:03.706655    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:03.708618    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:03.708627    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:03.708633    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:03.708635    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:03 GMT
	I0719 12:01:03.708639    4831 round_trippers.go:580]     Audit-Id: 95e29de8-1653-4a86-92d1-72bd48dd939e
	I0719 12:01:03.708643    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:03.708645    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:03.708648    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:03.708947    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:03.709130    4831 node_ready.go:53] node "multinode-871000" has status "Ready":"False"
	I0719 12:01:04.207051    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:04.207072    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:04.207083    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:04.207090    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:04.209502    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:04.209520    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:04.209528    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:04 GMT
	I0719 12:01:04.209534    4831 round_trippers.go:580]     Audit-Id: 96033790-6d25-4b96-b7c4-29046f0224b4
	I0719 12:01:04.209537    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:04.209540    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:04.209544    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:04.209548    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:04.209618    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:04.707394    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:04.707419    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:04.707431    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:04.707436    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:04.710162    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:04.710180    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:04.710189    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:04.710196    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:04.710202    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:04.710207    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:04.710211    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:04 GMT
	I0719 12:01:04.710214    4831 round_trippers.go:580]     Audit-Id: 08bb4bbb-8b81-4c21-9276-2a88c11ad6ec
	I0719 12:01:04.710525    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:05.206378    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:05.206394    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:05.206401    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:05.206407    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:05.208145    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:05.208158    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:05.208165    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:05.208170    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:05.208178    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:05 GMT
	I0719 12:01:05.208182    4831 round_trippers.go:580]     Audit-Id: 109a3915-a328-4b3a-992b-103724858fb0
	I0719 12:01:05.208186    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:05.208191    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:05.208687    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:05.706790    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:05.706815    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:05.706826    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:05.706831    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:05.709511    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:05.709529    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:05.709540    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:05.709548    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:05.709556    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:05.709560    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:05.709564    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:05 GMT
	I0719 12:01:05.709569    4831 round_trippers.go:580]     Audit-Id: a76ab312-c8c9-4752-9286-ae31851fbdf8
	I0719 12:01:05.709652    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:05.709899    4831 node_ready.go:53] node "multinode-871000" has status "Ready":"False"
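The block of requests above is minikube's node-readiness poll: roughly every 500 ms it GETs the Node object from the API server and checks whether the `Ready` condition is `"True"`, logging `has status "Ready":"False"` until it flips. The sketch below is a hypothetical, stripped-down version of that check for illustration only; the real code in `node_ready.go` uses the `k8s.io/client-go` typed clients rather than hand-parsed JSON, and the struct here covers only the fields the check needs.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// node is a minimal, hypothetical slice of the v1 Node object: just the
// metadata.name and status.conditions fields that a readiness check reads.
type node struct {
	Metadata struct {
		Name string `json:"name"`
	} `json:"metadata"`
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady reports whether the Node's "Ready" condition is "True",
// mirroring the decision behind the node_ready.go log lines above.
func nodeReady(raw []byte) (name string, ready bool, err error) {
	var n node
	if err := json.Unmarshal(raw, &n); err != nil {
		return "", false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return n.Metadata.Name, c.Status == "True", nil
		}
	}
	// No Ready condition reported yet: treat the node as not ready.
	return n.Metadata.Name, false, nil
}

func main() {
	// Payload shaped like the (truncated) response bodies in the log,
	// with a not-yet-ready node.
	raw := []byte(`{"metadata":{"name":"multinode-871000"},
		"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	name, ready, err := nodeReady(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %q ready=%v\n", name, ready)
}
```

In the real poller this check runs in a loop with a deadline; the test above keeps logging `"Ready":"False"` because the kubelet on the restarted node has not yet posted a `Ready=True` condition.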
	I0719 12:01:06.207255    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:06.207275    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:06.207287    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:06.207292    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:06.209859    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:06.209872    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:06.209879    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:06.209911    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:06.209920    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:06.209926    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:06.209948    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:06 GMT
	I0719 12:01:06.209962    4831 round_trippers.go:580]     Audit-Id: a2098aa3-78e4-466d-b079-2e04d7f652cd
	I0719 12:01:06.210272    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:06.706413    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:06.706437    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:06.706450    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:06.706456    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:06.709140    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:06.709158    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:06.709166    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:06.709170    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:06.709175    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:06.709180    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:06.709183    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:06 GMT
	I0719 12:01:06.709187    4831 round_trippers.go:580]     Audit-Id: 8f00113f-9166-48de-8a53-a97b4e7caff2
	I0719 12:01:06.709601    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:07.206594    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:07.206616    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:07.206627    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:07.206634    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:07.209777    4831 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 12:01:07.209791    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:07.209798    4831 round_trippers.go:580]     Audit-Id: 40b9a91a-6fa8-417a-8e05-b10024d49aa9
	I0719 12:01:07.209804    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:07.209809    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:07.209814    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:07.209818    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:07.209822    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:07 GMT
	I0719 12:01:07.209922    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:07.706628    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:07.706647    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:07.706658    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:07.706664    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:07.708842    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:07.708855    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:07.708862    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:07.708867    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:07.708870    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:07.708874    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:07 GMT
	I0719 12:01:07.708878    4831 round_trippers.go:580]     Audit-Id: f6d0c2a9-45f4-4321-ad0c-ab21af2da3e2
	I0719 12:01:07.708884    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:07.708952    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:08.206496    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:08.206517    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:08.206530    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:08.206536    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:08.208874    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:08.208889    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:08.208899    4831 round_trippers.go:580]     Audit-Id: 61f21ffb-7227-499e-88a8-7f21eb34b247
	I0719 12:01:08.208905    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:08.208909    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:08.208912    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:08.208916    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:08.208920    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:08 GMT
	I0719 12:01:08.209075    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"902","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0719 12:01:08.209318    4831 node_ready.go:53] node "multinode-871000" has status "Ready":"False"
	I0719 12:01:08.707163    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:08.707183    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:08.707196    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:08.707204    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:08.709857    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:08.709873    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:08.709881    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:08 GMT
	I0719 12:01:08.709886    4831 round_trippers.go:580]     Audit-Id: 78633b24-3b54-472f-954b-ec23aa1dd09f
	I0719 12:01:08.709910    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:08.709922    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:08.709926    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:08.709930    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:08.710037    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1015","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0719 12:01:09.207243    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:09.207266    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:09.207277    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:09.207284    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:09.210073    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:09.210091    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:09.210099    4831 round_trippers.go:580]     Audit-Id: 3f41916e-be2c-4c7b-833a-e2f5466f4060
	I0719 12:01:09.210104    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:09.210109    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:09.210113    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:09.210118    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:09.210123    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:09 GMT
	I0719 12:01:09.210188    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1015","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0719 12:01:09.706282    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:09.706298    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:09.706306    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:09.706312    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:09.708273    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:09.708283    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:09.708290    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:09.708295    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:09.708299    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:09.708314    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:09.708322    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:09 GMT
	I0719 12:01:09.708326    4831 round_trippers.go:580]     Audit-Id: 90bdf03f-63c9-4f70-9629-4e4c7bd07af9
	I0719 12:01:09.708452    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1015","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0719 12:01:10.207338    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:10.207354    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:10.207364    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:10.207368    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:10.282827    4831 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0719 12:01:10.282849    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:10.282859    4831 round_trippers.go:580]     Audit-Id: caf4fac1-a83a-4bb8-be8e-6f22825003d9
	I0719 12:01:10.282865    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:10.282870    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:10.282877    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:10.282883    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:10.282909    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:10 GMT
	I0719 12:01:10.283170    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:10.283427    4831 node_ready.go:49] node "multinode-871000" has status "Ready":"True"
	I0719 12:01:10.283445    4831 node_ready.go:38] duration metric: took 10.577375784s for node "multinode-871000" to be "Ready" ...
	I0719 12:01:10.283454    4831 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 12:01:10.283500    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:01:10.283507    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:10.283515    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:10.283521    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:10.287540    4831 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 12:01:10.287549    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:10.287554    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:10.287558    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:10 GMT
	I0719 12:01:10.287561    4831 round_trippers.go:580]     Audit-Id: d2878a28-80a5-4830-8db4-c96d17edd26d
	I0719 12:01:10.287564    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:10.287567    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:10.287570    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:10.288724    4831 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1022"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 86042 chars]
	I0719 12:01:10.290530    4831 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:10.290574    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:01:10.290579    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:10.290585    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:10.290589    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:10.292152    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:10.292163    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:10.292170    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:10 GMT
	I0719 12:01:10.292175    4831 round_trippers.go:580]     Audit-Id: c56623d3-29a1-45a1-886e-015c36c704fc
	I0719 12:01:10.292180    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:10.292184    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:10.292188    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:10.292214    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:10.292309    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0719 12:01:10.292534    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:10.292541    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:10.292546    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:10.292550    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:10.297542    4831 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 12:01:10.297551    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:10.297555    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:10.297558    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:10.297562    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:10 GMT
	I0719 12:01:10.297564    4831 round_trippers.go:580]     Audit-Id: cb0f4ae4-8d95-449d-a991-640c29f4a119
	I0719 12:01:10.297582    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:10.297589    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:10.297816    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:10.790876    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:01:10.790896    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:10.790908    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:10.790914    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:10.795665    4831 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 12:01:10.795675    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:10.795680    4831 round_trippers.go:580]     Audit-Id: 0f6d05cb-37d4-4bf4-ace8-24384db5dcdd
	I0719 12:01:10.795683    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:10.795686    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:10.795689    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:10.795692    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:10.795695    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:10 GMT
	I0719 12:01:10.796103    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0719 12:01:10.796386    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:10.796393    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:10.796399    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:10.796404    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:10.798239    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:10.798251    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:10.798257    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:10 GMT
	I0719 12:01:10.798260    4831 round_trippers.go:580]     Audit-Id: ada4a7cd-a686-4a06-b343-92e36440b9bb
	I0719 12:01:10.798263    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:10.798266    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:10.798268    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:10.798271    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:10.798361    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:11.290965    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:01:11.290985    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:11.290997    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:11.291005    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:11.293891    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:11.293905    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:11.293912    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:11.293916    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:11.293922    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:11.293926    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:11.293930    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:11 GMT
	I0719 12:01:11.293934    4831 round_trippers.go:580]     Audit-Id: db7a77a9-a64e-464d-a9bc-c5bd2e6ba8ef
	I0719 12:01:11.294075    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0719 12:01:11.294457    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:11.294467    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:11.294475    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:11.294480    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:11.295889    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:11.295897    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:11.295904    4831 round_trippers.go:580]     Audit-Id: 650d7f12-0ae4-42e4-9a1b-faca3f71edb1
	I0719 12:01:11.295909    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:11.295913    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:11.295916    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:11.295922    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:11.295937    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:11 GMT
	I0719 12:01:11.296121    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:11.791088    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:01:11.791116    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:11.791128    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:11.791134    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:11.794310    4831 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 12:01:11.794325    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:11.794332    4831 round_trippers.go:580]     Audit-Id: 10429346-c51d-4881-9730-d05f7fad3d89
	I0719 12:01:11.794338    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:11.794342    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:11.794347    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:11.794351    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:11.794355    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:11 GMT
	I0719 12:01:11.794462    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0719 12:01:11.794822    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:11.794831    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:11.794839    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:11.794843    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:11.796226    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:11.796237    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:11.796244    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:11.796269    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:11.796283    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:11.796291    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:11.796296    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:11 GMT
	I0719 12:01:11.796301    4831 round_trippers.go:580]     Audit-Id: 391906f7-97f9-4d14-af6d-6a2218ad5f6e
	I0719 12:01:11.796511    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:12.291216    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:01:12.291230    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.291236    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.291239    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.292860    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.292871    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.292879    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.292885    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:12 GMT
	I0719 12:01:12.292890    4831 round_trippers.go:580]     Audit-Id: 0cf802fe-2359-4e52-8dd2-e0cedb5bd98d
	I0719 12:01:12.292897    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.292901    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.292904    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.293071    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"947","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0719 12:01:12.293358    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:12.293365    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.293371    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.293374    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.294500    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.294511    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.294518    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.294523    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.294527    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:12 GMT
	I0719 12:01:12.294551    4831 round_trippers.go:580]     Audit-Id: 9e735d3b-437d-43f5-8d0a-c6ee0f179e73
	I0719 12:01:12.294564    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.294567    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.294747    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:12.294928    4831 pod_ready.go:102] pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace has status "Ready":"False"
	I0719 12:01:12.790762    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:01:12.790786    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.790871    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.790879    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.793624    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:12.793636    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.793644    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.793649    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.793654    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.793660    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:12 GMT
	I0719 12:01:12.793664    4831 round_trippers.go:580]     Audit-Id: 0045a688-0708-4e3c-be61-8812c76c6f1d
	I0719 12:01:12.793668    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.794110    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"1037","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0719 12:01:12.794469    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:12.794479    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.794487    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.794494    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.795667    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.795678    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.795685    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.795688    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.795691    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.795697    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.795700    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:12 GMT
	I0719 12:01:12.795703    4831 round_trippers.go:580]     Audit-Id: b100dbbf-f10d-44a9-ad86-a6a01c66e107
	I0719 12:01:12.795998    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:12.796168    4831 pod_ready.go:92] pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:12.796177    4831 pod_ready.go:81] duration metric: took 2.505644651s for pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.796184    4831 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.796215    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-871000
	I0719 12:01:12.796220    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.796225    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.796228    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.797357    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.797363    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.797369    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.797375    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:12 GMT
	I0719 12:01:12.797379    4831 round_trippers.go:580]     Audit-Id: 387b09b2-27b5-487f-b615-79b935091495
	I0719 12:01:12.797383    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.797386    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.797390    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.797504    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-871000","namespace":"kube-system","uid":"8818ed52-4b2d-4629-af02-b835e3cfa034","resourceVersion":"1020","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.mirror":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.seen":"2024-07-19T18:55:05.740545259Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0719 12:01:12.797750    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:12.797756    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.797761    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.797765    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.798727    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:12.798735    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.798738    4831 round_trippers.go:580]     Audit-Id: b297bcdc-4feb-4b3f-bb2c-d6130a7fa690
	I0719 12:01:12.798744    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.798749    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.798754    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.798757    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.798760    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:12 GMT
	I0719 12:01:12.798893    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:12.799060    4831 pod_ready.go:92] pod "etcd-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:12.799068    4831 pod_ready.go:81] duration metric: took 2.880399ms for pod "etcd-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.799079    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.799110    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-871000
	I0719 12:01:12.799115    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.799121    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.799125    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.800133    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.800141    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.800146    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:12 GMT
	I0719 12:01:12.800151    4831 round_trippers.go:580]     Audit-Id: b5716c0f-c9c6-4af8-b292-df106b436d3f
	I0719 12:01:12.800154    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.800156    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.800159    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.800162    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.800362    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-871000","namespace":"kube-system","uid":"9f3fdf92-3cbd-411c-802e-cbbbe1b60d68","resourceVersion":"993","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"1acc565de321609aa117f6402dfd5ae5","kubernetes.io/config.mirror":"1acc565de321609aa117f6402dfd5ae5","kubernetes.io/config.seen":"2024-07-19T18:55:05.740548209Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0719 12:01:12.800583    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:12.800590    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.800596    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.800599    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.801792    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.801800    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.801806    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.801810    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.801815    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:12.801821    4831 round_trippers.go:580]     Audit-Id: e5d5104d-a8cb-4f54-ad9e-478edf166f20
	I0719 12:01:12.801824    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.801827    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.802112    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:12.802278    4831 pod_ready.go:92] pod "kube-apiserver-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:12.802286    4831 pod_ready.go:81] duration metric: took 3.202194ms for pod "kube-apiserver-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.802292    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.802323    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-871000
	I0719 12:01:12.802328    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.802333    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.802338    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.803424    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.803433    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.803440    4831 round_trippers.go:580]     Audit-Id: cf39fe9d-f2f6-4e8c-9b66-91c57ad62fd7
	I0719 12:01:12.803447    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.803452    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.803457    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.803461    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.803463    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:12.803593    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-871000","namespace":"kube-system","uid":"74e143fb-26b8-4d1d-b07a-f1b2c590133f","resourceVersion":"1003","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8f11f40ce051787c8d8ced4f83327f27","kubernetes.io/config.mirror":"8f11f40ce051787c8d8ced4f83327f27","kubernetes.io/config.seen":"2024-07-19T18:55:05.740548943Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0719 12:01:12.803813    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:12.803821    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.803827    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.803831    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.804772    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:12.804778    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.804783    4831 round_trippers.go:580]     Audit-Id: b38cc3fb-2b02-4389-983c-73b9cdbaf280
	I0719 12:01:12.804786    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.804789    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.804792    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.804794    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.804797    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:12.804917    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:12.805076    4831 pod_ready.go:92] pod "kube-controller-manager-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:12.805084    4831 pod_ready.go:81] duration metric: took 2.786528ms for pod "kube-controller-manager-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.805091    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-86ssb" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.805129    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-86ssb
	I0719 12:01:12.805134    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.805140    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.805144    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.806159    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:12.806165    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.806170    4831 round_trippers.go:580]     Audit-Id: b9f9807c-33dc-45f2-a9ee-5b2429b13d2f
	I0719 12:01:12.806173    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.806175    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.806177    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.806195    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.806199    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:12.806337    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-86ssb","generateName":"kube-proxy-","namespace":"kube-system","uid":"37609942-98d8-4c6b-b339-53bf3a901e3f","resourceVersion":"862","creationTimestamp":"2024-07-19T18:57:03Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0719 12:01:12.806563    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m03
	I0719 12:01:12.806570    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.806575    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.806577    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.807514    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:12.807521    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.807526    4831 round_trippers.go:580]     Audit-Id: c12be27b-efb2-4b62-b73b-fede6b2d8f0d
	I0719 12:01:12.807529    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.807532    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.807534    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.807538    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.807541    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:12.807657    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m03","uid":"4745805a-e01a-4411-b942-abcd092662c6","resourceVersion":"889","creationTimestamp":"2024-07-19T18:59:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T11_59_53_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:59:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3641 chars]
	I0719 12:01:12.807802    4831 pod_ready.go:92] pod "kube-proxy-86ssb" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:12.807809    4831 pod_ready.go:81] duration metric: took 2.713615ms for pod "kube-proxy-86ssb" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.807816    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-89hm2" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:12.992186    4831 request.go:629] Waited for 184.332224ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89hm2
	I0719 12:01:12.992306    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89hm2
	I0719 12:01:12.992316    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:12.992325    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:12.992331    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:12.994688    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:12.994702    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:12.994712    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:12.994717    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:12.994721    4831 round_trippers.go:580]     Audit-Id: 0bac3d21-7e17-46c8-bc0e-3cd668703a12
	I0719 12:01:12.994725    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:12.994729    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:12.994734    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:12.994885    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-89hm2","generateName":"kube-proxy-","namespace":"kube-system","uid":"77b4b485-53f0-4480-8b62-a1df26f037b8","resourceVersion":"979","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0719 12:01:13.191224    4831 request.go:629] Waited for 195.982643ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:13.191334    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:13.191346    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:13.191357    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:13.191367    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:13.194050    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:13.194067    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:13.194075    4831 round_trippers.go:580]     Audit-Id: b8c02717-0677-4f98-b81c-a32c519ebf7f
	I0719 12:01:13.194079    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:13.194082    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:13.194086    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:13.194090    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:13.194093    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:13.194461    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1022","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0719 12:01:13.194710    4831 pod_ready.go:92] pod "kube-proxy-89hm2" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:13.194722    4831 pod_ready.go:81] duration metric: took 386.901578ms for pod "kube-proxy-89hm2" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:13.194732    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t9bqq" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:13.391733    4831 request.go:629] Waited for 196.93835ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t9bqq
	I0719 12:01:13.391899    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t9bqq
	I0719 12:01:13.391910    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:13.391920    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:13.391925    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:13.394807    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:13.394822    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:13.394830    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:13.394834    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:13.394839    4831 round_trippers.go:580]     Audit-Id: 3bc89593-7e91-45ac-abd2-9679c98d2d42
	I0719 12:01:13.394842    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:13.394846    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:13.394849    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:13.394917    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t9bqq","generateName":"kube-proxy-","namespace":"kube-system","uid":"5ef191fc-6e2e-486c-b825-76c6e0d95416","resourceVersion":"523","creationTimestamp":"2024-07-19T18:56:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:56:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0719 12:01:13.591577    4831 request.go:629] Waited for 196.343729ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:13.591658    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:13.591664    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:13.591670    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:13.591674    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:13.593173    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:13.593183    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:13.593188    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:13.593191    4831 round_trippers.go:580]     Audit-Id: 522e877a-9212-412f-a1c2-e249824e8f02
	I0719 12:01:13.593194    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:13.593197    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:13.593200    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:13.593203    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:13.593271    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"e0450b58-f42e-4eee-a22b-05f89b4b721d","resourceVersion":"589","creationTimestamp":"2024-07-19T18:56:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T11_56_14_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:56:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0719 12:01:13.593446    4831 pod_ready.go:92] pod "kube-proxy-t9bqq" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:13.593454    4831 pod_ready.go:81] duration metric: took 398.717667ms for pod "kube-proxy-t9bqq" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:13.593461    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:13.791188    4831 request.go:629] Waited for 197.685056ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-871000
	I0719 12:01:13.791332    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-871000
	I0719 12:01:13.791342    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:13.791353    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:13.791360    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:13.794105    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:13.794117    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:13.794124    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:13.794132    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:13 GMT
	I0719 12:01:13.794138    4831 round_trippers.go:580]     Audit-Id: f030cc1f-b164-4ff8-b0ab-d1a4c9277014
	I0719 12:01:13.794161    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:13.794171    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:13.794179    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:13.794371    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-871000","namespace":"kube-system","uid":"0d73182a-0458-470e-ac06-ccde27fa113a","resourceVersion":"1012","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b43ab2caff1f80690c8bfbb88ac08a85","kubernetes.io/config.mirror":"b43ab2caff1f80690c8bfbb88ac08a85","kubernetes.io/config.seen":"2024-07-19T18:55:00.040869314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0719 12:01:13.991410    4831 request.go:629] Waited for 196.656272ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:13.991481    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:13.991492    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:13.991505    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:13.991512    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:13.994043    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:13.994058    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:13.994065    4831 round_trippers.go:580]     Audit-Id: 80eadb07-ea7b-4672-8daa-303e15c367f0
	I0719 12:01:13.994108    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:13.994115    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:13.994119    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:13.994124    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:13.994128    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:14 GMT
	I0719 12:01:13.994199    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0719 12:01:13.994444    4831 pod_ready.go:92] pod "kube-scheduler-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:13.994456    4831 pod_ready.go:81] duration metric: took 400.990327ms for pod "kube-scheduler-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:13.994465    4831 pod_ready.go:38] duration metric: took 3.711014187s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 12:01:13.994480    4831 api_server.go:52] waiting for apiserver process to appear ...
	I0719 12:01:13.994566    4831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 12:01:14.007526    4831 command_runner.go:130] > 1608
	I0719 12:01:14.007547    4831 api_server.go:72] duration metric: took 14.536621271s to wait for apiserver process to appear ...
	I0719 12:01:14.007554    4831 api_server.go:88] waiting for apiserver healthz status ...
	I0719 12:01:14.007564    4831 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0719 12:01:14.011279    4831 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0719 12:01:14.011309    4831 round_trippers.go:463] GET https://192.169.0.16:8443/version
	I0719 12:01:14.011314    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:14.011320    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:14.011325    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:14.011853    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:14.011863    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:14.011869    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:14 GMT
	I0719 12:01:14.011873    4831 round_trippers.go:580]     Audit-Id: f1f852e0-9756-4fad-8aa2-5050cf2e389f
	I0719 12:01:14.011877    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:14.011880    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:14.011883    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:14.011887    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:14.011890    4831 round_trippers.go:580]     Content-Length: 263
	I0719 12:01:14.011899    4831 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0719 12:01:14.011920    4831 api_server.go:141] control plane version: v1.30.3
	I0719 12:01:14.011927    4831 api_server.go:131] duration metric: took 4.369782ms to wait for apiserver health ...
	I0719 12:01:14.011933    4831 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 12:01:14.192306    4831 request.go:629] Waited for 180.314222ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:01:14.192382    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:01:14.192472    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:14.192486    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:14.192494    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:14.196552    4831 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 12:01:14.196568    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:14.196580    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:14.196587    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:14.196593    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:14 GMT
	I0719 12:01:14.196615    4831 round_trippers.go:580]     Audit-Id: 5cc0f340-9362-4237-9188-a424d0f8a1de
	I0719 12:01:14.196630    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:14.196640    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:14.197378    4831 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1045"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"1037","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85990 chars]
	I0719 12:01:14.199216    4831 system_pods.go:59] 12 kube-system pods found
	I0719 12:01:14.199226    4831 system_pods.go:61] "coredns-7db6d8ff4d-85r26" [c7d62ec5-693b-46ab-9437-86aef8b469e8] Running
	I0719 12:01:14.199233    4831 system_pods.go:61] "etcd-multinode-871000" [8818ed52-4b2d-4629-af02-b835e3cfa034] Running
	I0719 12:01:14.199237    4831 system_pods.go:61] "kindnet-4stbd" [58fb2d63-07bb-4a27-87c5-4e259083f5be] Running
	I0719 12:01:14.199240    4831 system_pods.go:61] "kindnet-897rz" [a3c96d7b-9aa1-49e1-9fa6-8aad9551be4f] Running
	I0719 12:01:14.199243    4831 system_pods.go:61] "kindnet-hht5h" [f1a7b402-0cf3-469c-8124-6b53aa34f4c7] Running
	I0719 12:01:14.199245    4831 system_pods.go:61] "kube-apiserver-multinode-871000" [9f3fdf92-3cbd-411c-802e-cbbbe1b60d68] Running
	I0719 12:01:14.199248    4831 system_pods.go:61] "kube-controller-manager-multinode-871000" [74e143fb-26b8-4d1d-b07a-f1b2c590133f] Running
	I0719 12:01:14.199251    4831 system_pods.go:61] "kube-proxy-86ssb" [37609942-98d8-4c6b-b339-53bf3a901e3f] Running
	I0719 12:01:14.199253    4831 system_pods.go:61] "kube-proxy-89hm2" [77b4b485-53f0-4480-8b62-a1df26f037b8] Running
	I0719 12:01:14.199255    4831 system_pods.go:61] "kube-proxy-t9bqq" [5ef191fc-6e2e-486c-b825-76c6e0d95416] Running
	I0719 12:01:14.199258    4831 system_pods.go:61] "kube-scheduler-multinode-871000" [0d73182a-0458-470e-ac06-ccde27fa113a] Running
	I0719 12:01:14.199261    4831 system_pods.go:61] "storage-provisioner" [ccd0aaec-abf0-4aec-9ebf-14f619510aeb] Running
	I0719 12:01:14.199265    4831 system_pods.go:74] duration metric: took 187.329082ms to wait for pod list to return data ...
	I0719 12:01:14.199270    4831 default_sa.go:34] waiting for default service account to be created ...
	I0719 12:01:14.391741    4831 request.go:629] Waited for 192.421328ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/default/serviceaccounts
	I0719 12:01:14.391843    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/default/serviceaccounts
	I0719 12:01:14.391859    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:14.391895    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:14.391906    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:14.394636    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:14.394649    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:14.394656    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:14.394682    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:14.394691    4831 round_trippers.go:580]     Content-Length: 262
	I0719 12:01:14.394695    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:14 GMT
	I0719 12:01:14.394700    4831 round_trippers.go:580]     Audit-Id: 6b235191-1b10-4814-9114-175c0be567bc
	I0719 12:01:14.394703    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:14.394707    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:14.394720    4831 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1045"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"ccdcd62c-500a-4785-b87e-b6abf5989afc","resourceVersion":"363","creationTimestamp":"2024-07-19T18:55:20Z"}}]}
	I0719 12:01:14.394861    4831 default_sa.go:45] found service account: "default"
	I0719 12:01:14.394873    4831 default_sa.go:55] duration metric: took 195.598234ms for default service account to be created ...
	I0719 12:01:14.394879    4831 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 12:01:14.590815    4831 request.go:629] Waited for 195.900423ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:01:14.590861    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:01:14.590866    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:14.590872    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:14.590877    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:14.594875    4831 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 12:01:14.594907    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:14.594915    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:14 GMT
	I0719 12:01:14.594918    4831 round_trippers.go:580]     Audit-Id: 03c851a1-add9-4157-91e1-7326271475b6
	I0719 12:01:14.594920    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:14.594923    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:14.594926    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:14.594928    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:14.596077    4831 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1045"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"1037","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85990 chars]
	I0719 12:01:14.597907    4831 system_pods.go:86] 12 kube-system pods found
	I0719 12:01:14.597917    4831 system_pods.go:89] "coredns-7db6d8ff4d-85r26" [c7d62ec5-693b-46ab-9437-86aef8b469e8] Running
	I0719 12:01:14.597922    4831 system_pods.go:89] "etcd-multinode-871000" [8818ed52-4b2d-4629-af02-b835e3cfa034] Running
	I0719 12:01:14.597926    4831 system_pods.go:89] "kindnet-4stbd" [58fb2d63-07bb-4a27-87c5-4e259083f5be] Running
	I0719 12:01:14.597929    4831 system_pods.go:89] "kindnet-897rz" [a3c96d7b-9aa1-49e1-9fa6-8aad9551be4f] Running
	I0719 12:01:14.597933    4831 system_pods.go:89] "kindnet-hht5h" [f1a7b402-0cf3-469c-8124-6b53aa34f4c7] Running
	I0719 12:01:14.597936    4831 system_pods.go:89] "kube-apiserver-multinode-871000" [9f3fdf92-3cbd-411c-802e-cbbbe1b60d68] Running
	I0719 12:01:14.597941    4831 system_pods.go:89] "kube-controller-manager-multinode-871000" [74e143fb-26b8-4d1d-b07a-f1b2c590133f] Running
	I0719 12:01:14.597944    4831 system_pods.go:89] "kube-proxy-86ssb" [37609942-98d8-4c6b-b339-53bf3a901e3f] Running
	I0719 12:01:14.597948    4831 system_pods.go:89] "kube-proxy-89hm2" [77b4b485-53f0-4480-8b62-a1df26f037b8] Running
	I0719 12:01:14.597951    4831 system_pods.go:89] "kube-proxy-t9bqq" [5ef191fc-6e2e-486c-b825-76c6e0d95416] Running
	I0719 12:01:14.597955    4831 system_pods.go:89] "kube-scheduler-multinode-871000" [0d73182a-0458-470e-ac06-ccde27fa113a] Running
	I0719 12:01:14.597958    4831 system_pods.go:89] "storage-provisioner" [ccd0aaec-abf0-4aec-9ebf-14f619510aeb] Running
	I0719 12:01:14.597963    4831 system_pods.go:126] duration metric: took 203.07922ms to wait for k8s-apps to be running ...
	I0719 12:01:14.597968    4831 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 12:01:14.598021    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 12:01:14.610118    4831 system_svc.go:56] duration metric: took 12.144912ms WaitForService to wait for kubelet
	I0719 12:01:14.610131    4831 kubeadm.go:582] duration metric: took 15.139206631s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:01:14.610143    4831 node_conditions.go:102] verifying NodePressure condition ...
	I0719 12:01:14.790946    4831 request.go:629] Waited for 180.72086ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes
	I0719 12:01:14.790995    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0719 12:01:14.791003    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:14.791013    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:14.791021    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:14.793367    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:14.793382    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:14.793389    4831 round_trippers.go:580]     Audit-Id: 46f01e81-0ba1-4bdd-80b4-c4cfb8c76e66
	I0719 12:01:14.793395    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:14.793407    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:14.793412    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:14.793417    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:14.793420    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:14 GMT
	I0719 12:01:14.793548    4831 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1045"},"items":[{"metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14677 chars]
	I0719 12:01:14.793977    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:01:14.793988    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:01:14.794015    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:01:14.794018    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:01:14.794021    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:01:14.794030    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:01:14.794035    4831 node_conditions.go:105] duration metric: took 183.888782ms to run NodePressure ...
	I0719 12:01:14.794043    4831 start.go:241] waiting for startup goroutines ...
	I0719 12:01:14.794048    4831 start.go:246] waiting for cluster config update ...
	I0719 12:01:14.794054    4831 start.go:255] writing updated cluster config ...
	I0719 12:01:14.819775    4831 out.go:177] 
	I0719 12:01:14.839829    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:01:14.839957    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:01:14.862749    4831 out.go:177] * Starting "multinode-871000-m02" worker node in "multinode-871000" cluster
	I0719 12:01:14.904611    4831 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:01:14.904645    4831 cache.go:56] Caching tarball of preloaded images
	I0719 12:01:14.904836    4831 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1053/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 12:01:14.904854    4831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:01:14.904983    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:01:14.905875    4831 start.go:360] acquireMachinesLock for multinode-871000-m02: {Name:mk9f33e92e6d472bd2fb7a1dc1c9d72253ce59c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:01:14.905953    4831 start.go:364] duration metric: took 62.487µs to acquireMachinesLock for "multinode-871000-m02"
	I0719 12:01:14.905971    4831 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:01:14.905977    4831 fix.go:54] fixHost starting: m02
	I0719 12:01:14.906292    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:14.906309    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:14.915286    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53190
	I0719 12:01:14.915632    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:14.916090    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:14.916110    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:14.916371    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:14.916495    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:14.916591    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetState
	I0719 12:01:14.916684    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:14.916776    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | hyperkit pid from json: 4223
	I0719 12:01:14.917680    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | hyperkit pid 4223 missing from process table
	I0719 12:01:14.917704    4831 fix.go:112] recreateIfNeeded on multinode-871000-m02: state=Stopped err=<nil>
	I0719 12:01:14.917720    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	W0719 12:01:14.917799    4831 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:01:14.938957    4831 out.go:177] * Restarting existing hyperkit VM for "multinode-871000-m02" ...
	I0719 12:01:14.980789    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .Start
	I0719 12:01:14.981060    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:14.981135    4831 main.go:141] libmachine: (multinode-871000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/hyperkit.pid
	I0719 12:01:14.982852    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | hyperkit pid 4223 missing from process table
	I0719 12:01:14.982872    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | pid 4223 is in state "Stopped"
	I0719 12:01:14.982892    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/hyperkit.pid...
	I0719 12:01:14.983111    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Using UUID 0156b6d9-fc48-4ae8-8601-a045f8c107f0
	I0719 12:01:15.009357    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Generated MAC 36:3f:5c:47:18:4c
	I0719 12:01:15.009376    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000
	I0719 12:01:15.009509    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0156b6d9-fc48-4ae8-8601-a045f8c107f0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acba0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0719 12:01:15.009548    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0156b6d9-fc48-4ae8-8601-a045f8c107f0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acba0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0719 12:01:15.009584    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "0156b6d9-fc48-4ae8-8601-a045f8c107f0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/multinode-871000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/tty,log=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/bzimage,/Users/j
enkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000"}
	I0719 12:01:15.009629    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 0156b6d9-fc48-4ae8-8601-a045f8c107f0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/multinode-871000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/tty,log=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/bzimage,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/mult
inode-871000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000"
	I0719 12:01:15.009640    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0719 12:01:15.010985    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 DEBUG: hyperkit: Pid is 4857
	I0719 12:01:15.011511    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Attempt 0
	I0719 12:01:15.011532    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:15.011608    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | hyperkit pid from json: 4857
	I0719 12:01:15.013370    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Searching for 36:3f:5c:47:18:4c in /var/db/dhcpd_leases ...
	I0719 12:01:15.013439    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0719 12:01:15.013473    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:4c:c6:88:73:ec ID:1,f2:4c:c6:88:73:ec Lease:0x669c0959}
	I0719 12:01:15.013498    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:5e:a3:f5:89:e4:9e ID:1,5e:a3:f5:89:e4:9e Lease:0x669ab7be}
	I0719 12:01:15.013511    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:36:3f:5c:47:18:4c ID:1,36:3f:5c:47:18:4c Lease:0x669c0844}
	I0719 12:01:15.013532    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | Found match: 36:3f:5c:47:18:4c
	I0719 12:01:15.013549    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetConfigRaw
	I0719 12:01:15.013567    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | IP: 192.169.0.18
	I0719 12:01:15.014251    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetIP
	I0719 12:01:15.014429    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:01:15.014874    4831 machine.go:94] provisionDockerMachine start ...
	I0719 12:01:15.014884    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:15.014993    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:15.015109    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:15.015233    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:15.015390    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:15.015491    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:15.015629    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:15.015797    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0719 12:01:15.015805    4831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 12:01:15.019046    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0719 12:01:15.027408    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0719 12:01:15.028366    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0719 12:01:15.028394    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0719 12:01:15.028413    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0719 12:01:15.028431    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0719 12:01:15.407850    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0719 12:01:15.407881    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0719 12:01:15.522578    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0719 12:01:15.522610    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0719 12:01:15.522647    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0719 12:01:15.522666    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0719 12:01:15.523454    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0719 12:01:15.523463    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0719 12:01:20.789057    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:20 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0719 12:01:20.789103    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:20 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0719 12:01:20.789119    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:20 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0719 12:01:20.812558    4831 main.go:141] libmachine: (multinode-871000-m02) DBG | 2024/07/19 12:01:20 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0719 12:01:26.077593    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 12:01:26.077620    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetMachineName
	I0719 12:01:26.077758    4831 buildroot.go:166] provisioning hostname "multinode-871000-m02"
	I0719 12:01:26.077770    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetMachineName
	I0719 12:01:26.077854    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:26.077950    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:26.078032    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.078110    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.078209    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:26.078331    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:26.078487    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0719 12:01:26.078495    4831 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-871000-m02 && echo "multinode-871000-m02" | sudo tee /etc/hostname
	I0719 12:01:26.141205    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-871000-m02
	
	I0719 12:01:26.141220    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:26.141353    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:26.141448    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.141540    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.141624    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:26.141773    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:26.141918    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0719 12:01:26.141929    4831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-871000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-871000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-871000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 12:01:26.198278    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 12:01:26.198293    4831 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19307-1053/.minikube CaCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19307-1053/.minikube}
	I0719 12:01:26.198303    4831 buildroot.go:174] setting up certificates
	I0719 12:01:26.198311    4831 provision.go:84] configureAuth start
	I0719 12:01:26.198318    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetMachineName
	I0719 12:01:26.198456    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetIP
	I0719 12:01:26.198558    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:26.198640    4831 provision.go:143] copyHostCerts
	I0719 12:01:26.198668    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem
	I0719 12:01:26.198739    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem, removing ...
	I0719 12:01:26.198745    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem
	I0719 12:01:26.198894    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem (1078 bytes)
	I0719 12:01:26.199110    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem
	I0719 12:01:26.199156    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem, removing ...
	I0719 12:01:26.199161    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem
	I0719 12:01:26.199243    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem (1123 bytes)
	I0719 12:01:26.199401    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem
	I0719 12:01:26.199444    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem, removing ...
	I0719 12:01:26.199448    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem
	I0719 12:01:26.199527    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem (1675 bytes)
	I0719 12:01:26.199723    4831 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem org=jenkins.multinode-871000-m02 san=[127.0.0.1 192.169.0.18 localhost minikube multinode-871000-m02]
	I0719 12:01:26.273916    4831 provision.go:177] copyRemoteCerts
	I0719 12:01:26.274023    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 12:01:26.274064    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:26.274305    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:26.274464    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.274572    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:26.274695    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/id_rsa Username:docker}
	I0719 12:01:26.306988    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 12:01:26.307065    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 12:01:26.326746    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 12:01:26.326814    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0719 12:01:26.346699    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 12:01:26.346769    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 12:01:26.366495    4831 provision.go:87] duration metric: took 168.170131ms to configureAuth
	I0719 12:01:26.366512    4831 buildroot.go:189] setting minikube options for container-runtime
	I0719 12:01:26.366695    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:01:26.366729    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:26.366857    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:26.366952    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:26.367039    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.367107    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.367195    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:26.367303    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:26.367432    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0719 12:01:26.367440    4831 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 12:01:26.418614    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 12:01:26.418627    4831 buildroot.go:70] root file system type: tmpfs
	I0719 12:01:26.418710    4831 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 12:01:26.418723    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:26.418852    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:26.418949    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.419039    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.419125    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:26.419272    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:26.419413    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0719 12:01:26.419458    4831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 12:01:26.480650    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 12:01:26.480669    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:26.480800    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:26.480882    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.480980    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:26.481075    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:26.481207    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:26.481350    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0719 12:01:26.481362    4831 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 12:01:28.067920    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 12:01:28.067935    4831 machine.go:97] duration metric: took 13.053095328s to provisionDockerMachine
	I0719 12:01:28.067943    4831 start.go:293] postStartSetup for "multinode-871000-m02" (driver="hyperkit")
	I0719 12:01:28.067950    4831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 12:01:28.067960    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:28.068163    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 12:01:28.068176    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:28.068286    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:28.068373    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:28.068471    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:28.068569    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/id_rsa Username:docker}
	I0719 12:01:28.110906    4831 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 12:01:28.115909    4831 command_runner.go:130] > NAME=Buildroot
	I0719 12:01:28.115920    4831 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 12:01:28.115924    4831 command_runner.go:130] > ID=buildroot
	I0719 12:01:28.115928    4831 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 12:01:28.115931    4831 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 12:01:28.115959    4831 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 12:01:28.115967    4831 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1053/.minikube/addons for local assets ...
	I0719 12:01:28.116068    4831 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1053/.minikube/files for local assets ...
	I0719 12:01:28.116252    4831 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> 15922.pem in /etc/ssl/certs
	I0719 12:01:28.116258    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> /etc/ssl/certs/15922.pem
	I0719 12:01:28.116464    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 12:01:28.125931    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem --> /etc/ssl/certs/15922.pem (1708 bytes)
	I0719 12:01:28.152964    4831 start.go:296] duration metric: took 85.012579ms for postStartSetup
	I0719 12:01:28.152986    4831 fix.go:56] duration metric: took 13.247051958s for fixHost
	I0719 12:01:28.153002    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:28.153136    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:28.153266    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:28.153362    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:28.153456    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:28.153586    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:28.153727    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0719 12:01:28.153734    4831 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 12:01:28.206346    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721415688.390002539
	
	I0719 12:01:28.206357    4831 fix.go:216] guest clock: 1721415688.390002539
	I0719 12:01:28.206362    4831 fix.go:229] Guest: 2024-07-19 12:01:28.390002539 -0700 PDT Remote: 2024-07-19 12:01:28.152992 -0700 PDT m=+55.787755802 (delta=237.010539ms)
	I0719 12:01:28.206372    4831 fix.go:200] guest clock delta is within tolerance: 237.010539ms
	I0719 12:01:28.206376    4831 start.go:83] releasing machines lock for "multinode-871000-m02", held for 13.300458195s
	I0719 12:01:28.206393    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:28.206508    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetIP
	I0719 12:01:28.227092    4831 out.go:177] * Found network options:
	I0719 12:01:28.247879    4831 out.go:177]   - NO_PROXY=192.169.0.16
	W0719 12:01:28.270003    4831 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 12:01:28.270061    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:28.270952    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:28.271221    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:28.271323    4831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 12:01:28.271368    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	W0719 12:01:28.271470    4831 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 12:01:28.271569    4831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 12:01:28.271581    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:28.271591    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:28.271788    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:28.271826    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:28.272010    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:28.272025    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:28.272179    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/id_rsa Username:docker}
	I0719 12:01:28.272206    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:28.272354    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/id_rsa Username:docker}
	I0719 12:01:28.300976    4831 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 12:01:28.301000    4831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 12:01:28.301059    4831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 12:01:28.350887    4831 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0719 12:01:28.351698    4831 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0719 12:01:28.351720    4831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 12:01:28.351729    4831 start.go:495] detecting cgroup driver to use...
	I0719 12:01:28.351804    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 12:01:28.366743    4831 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 12:01:28.367005    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 12:01:28.375738    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 12:01:28.384457    4831 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 12:01:28.384505    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 12:01:28.393286    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 12:01:28.401942    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 12:01:28.410897    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 12:01:28.419464    4831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 12:01:28.428431    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 12:01:28.437254    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 12:01:28.445904    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 12:01:28.454819    4831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 12:01:28.462772    4831 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 12:01:28.462879    4831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 12:01:28.471061    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:01:28.570246    4831 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 12:01:28.587401    4831 start.go:495] detecting cgroup driver to use...
	I0719 12:01:28.587479    4831 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 12:01:28.607311    4831 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 12:01:28.607781    4831 command_runner.go:130] > [Unit]
	I0719 12:01:28.607796    4831 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 12:01:28.607804    4831 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 12:01:28.607810    4831 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 12:01:28.607814    4831 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 12:01:28.607818    4831 command_runner.go:130] > StartLimitBurst=3
	I0719 12:01:28.607822    4831 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 12:01:28.607825    4831 command_runner.go:130] > [Service]
	I0719 12:01:28.607830    4831 command_runner.go:130] > Type=notify
	I0719 12:01:28.607833    4831 command_runner.go:130] > Restart=on-failure
	I0719 12:01:28.607837    4831 command_runner.go:130] > Environment=NO_PROXY=192.169.0.16
	I0719 12:01:28.607843    4831 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 12:01:28.607854    4831 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 12:01:28.607861    4831 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 12:01:28.607866    4831 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 12:01:28.607872    4831 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 12:01:28.607887    4831 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 12:01:28.607899    4831 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 12:01:28.607912    4831 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 12:01:28.607918    4831 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 12:01:28.607922    4831 command_runner.go:130] > ExecStart=
	I0719 12:01:28.607940    4831 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0719 12:01:28.607945    4831 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 12:01:28.607952    4831 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 12:01:28.607958    4831 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 12:01:28.607961    4831 command_runner.go:130] > LimitNOFILE=infinity
	I0719 12:01:28.607967    4831 command_runner.go:130] > LimitNPROC=infinity
	I0719 12:01:28.607973    4831 command_runner.go:130] > LimitCORE=infinity
	I0719 12:01:28.608000    4831 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 12:01:28.608006    4831 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 12:01:28.608009    4831 command_runner.go:130] > TasksMax=infinity
	I0719 12:01:28.608013    4831 command_runner.go:130] > TimeoutStartSec=0
	I0719 12:01:28.608018    4831 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 12:01:28.608024    4831 command_runner.go:130] > Delegate=yes
	I0719 12:01:28.608029    4831 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 12:01:28.608037    4831 command_runner.go:130] > KillMode=process
	I0719 12:01:28.608041    4831 command_runner.go:130] > [Install]
	I0719 12:01:28.608045    4831 command_runner.go:130] > WantedBy=multi-user.target
	I0719 12:01:28.608155    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 12:01:28.620507    4831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 12:01:28.641361    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 12:01:28.652511    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 12:01:28.663646    4831 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 12:01:28.685363    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 12:01:28.696159    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 12:01:28.711054    4831 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 12:01:28.711285    4831 ssh_runner.go:195] Run: which cri-dockerd
	I0719 12:01:28.714140    4831 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 12:01:28.714324    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 12:01:28.721655    4831 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 12:01:28.734998    4831 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 12:01:28.834910    4831 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 12:01:28.951493    4831 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 12:01:28.951517    4831 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 12:01:28.966896    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:01:29.068681    4831 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 12:01:31.353955    4831 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.285262248s)
	I0719 12:01:31.354021    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 12:01:31.365222    4831 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0719 12:01:31.379213    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 12:01:31.390372    4831 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 12:01:31.491779    4831 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 12:01:31.583591    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:01:31.682747    4831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 12:01:31.696512    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 12:01:31.708413    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:01:31.803185    4831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 12:01:31.860134    4831 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 12:01:31.860208    4831 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 12:01:31.864363    4831 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0719 12:01:31.864377    4831 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 12:01:31.864382    4831 command_runner.go:130] > Device: 0,22	Inode: 770         Links: 1
	I0719 12:01:31.864387    4831 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0719 12:01:31.864391    4831 command_runner.go:130] > Access: 2024-07-19 19:01:32.000301231 +0000
	I0719 12:01:31.864402    4831 command_runner.go:130] > Modify: 2024-07-19 19:01:32.000301231 +0000
	I0719 12:01:31.864407    4831 command_runner.go:130] > Change: 2024-07-19 19:01:32.002301069 +0000
	I0719 12:01:31.864410    4831 command_runner.go:130] >  Birth: -
	I0719 12:01:31.864580    4831 start.go:563] Will wait 60s for crictl version
	I0719 12:01:31.864627    4831 ssh_runner.go:195] Run: which crictl
	I0719 12:01:31.867575    4831 command_runner.go:130] > /usr/bin/crictl
	I0719 12:01:31.867685    4831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 12:01:31.895735    4831 command_runner.go:130] > Version:  0.1.0
	I0719 12:01:31.895747    4831 command_runner.go:130] > RuntimeName:  docker
	I0719 12:01:31.895838    4831 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0719 12:01:31.895891    4831 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 12:01:31.897011    4831 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 12:01:31.897077    4831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 12:01:31.913433    4831 command_runner.go:130] > 27.0.3
	I0719 12:01:31.914376    4831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 12:01:31.930760    4831 command_runner.go:130] > 27.0.3
	I0719 12:01:31.952942    4831 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 12:01:31.974894    4831 out.go:177]   - env NO_PROXY=192.169.0.16
	I0719 12:01:31.995734    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetIP
	I0719 12:01:31.996154    4831 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0719 12:01:32.000442    4831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 12:01:32.010582    4831 mustload.go:65] Loading cluster: multinode-871000
	I0719 12:01:32.010758    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:01:32.010982    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:32.010997    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:32.019725    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53211
	I0719 12:01:32.020067    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:32.020413    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:32.020429    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:32.020621    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:32.020736    4831 main.go:141] libmachine: (multinode-871000) Calling .GetState
	I0719 12:01:32.020818    4831 main.go:141] libmachine: (multinode-871000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:32.020914    4831 main.go:141] libmachine: (multinode-871000) DBG | hyperkit pid from json: 4843
	I0719 12:01:32.021858    4831 host.go:66] Checking if "multinode-871000" exists ...
	I0719 12:01:32.022124    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:32.022145    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:32.030713    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53213
	I0719 12:01:32.031056    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:32.031393    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:32.031404    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:32.031586    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:32.031700    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:01:32.031802    4831 certs.go:68] Setting up /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000 for IP: 192.169.0.18
	I0719 12:01:32.031808    4831 certs.go:194] generating shared ca certs ...
	I0719 12:01:32.031820    4831 certs.go:226] acquiring lock for ca certs: {Name:mk78732514e475c67b8a22bdfb9da389d614aef2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:01:32.031981    4831 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.key
	I0719 12:01:32.032057    4831 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.key
	I0719 12:01:32.032067    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 12:01:32.032088    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 12:01:32.032107    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 12:01:32.032125    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 12:01:32.032218    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592.pem (1338 bytes)
	W0719 12:01:32.032269    4831 certs.go:480] ignoring /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592_empty.pem, impossibly tiny 0 bytes
	I0719 12:01:32.032280    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 12:01:32.032314    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem (1078 bytes)
	I0719 12:01:32.032349    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem (1123 bytes)
	I0719 12:01:32.032378    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem (1675 bytes)
	I0719 12:01:32.032472    4831 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem (1708 bytes)
	I0719 12:01:32.032507    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592.pem -> /usr/share/ca-certificates/1592.pem
	I0719 12:01:32.032528    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> /usr/share/ca-certificates/15922.pem
	I0719 12:01:32.032551    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:01:32.032576    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 12:01:32.052233    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 12:01:32.071854    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 12:01:32.091255    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 12:01:32.110572    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/1592.pem --> /usr/share/ca-certificates/1592.pem (1338 bytes)
	I0719 12:01:32.129684    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem --> /usr/share/ca-certificates/15922.pem (1708 bytes)
	I0719 12:01:32.148789    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 12:01:32.167899    4831 ssh_runner.go:195] Run: openssl version
	I0719 12:01:32.171959    4831 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0719 12:01:32.172092    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1592.pem && ln -fs /usr/share/ca-certificates/1592.pem /etc/ssl/certs/1592.pem"
	I0719 12:01:32.181030    4831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1592.pem
	I0719 12:01:32.184302    4831 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 18:22 /usr/share/ca-certificates/1592.pem
	I0719 12:01:32.184427    4831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:22 /usr/share/ca-certificates/1592.pem
	I0719 12:01:32.184466    4831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1592.pem
	I0719 12:01:32.188487    4831 command_runner.go:130] > 51391683
	I0719 12:01:32.188670    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1592.pem /etc/ssl/certs/51391683.0"
	I0719 12:01:32.197628    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15922.pem && ln -fs /usr/share/ca-certificates/15922.pem /etc/ssl/certs/15922.pem"
	I0719 12:01:32.206800    4831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15922.pem
	I0719 12:01:32.210082    4831 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 18:22 /usr/share/ca-certificates/15922.pem
	I0719 12:01:32.210167    4831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:22 /usr/share/ca-certificates/15922.pem
	I0719 12:01:32.210220    4831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15922.pem
	I0719 12:01:32.214519    4831 command_runner.go:130] > 3ec20f2e
	I0719 12:01:32.214724    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15922.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 12:01:32.224361    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 12:01:32.233959    4831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:01:32.237294    4831 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 18:14 /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:01:32.237400    4831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:14 /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:01:32.237438    4831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 12:01:32.241505    4831 command_runner.go:130] > b5213941
	I0719 12:01:32.241702    4831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 12:01:32.250746    4831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 12:01:32.253757    4831 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 12:01:32.253842    4831 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 12:01:32.253876    4831 kubeadm.go:934] updating node {m02 192.169.0.18 8443 v1.30.3 docker false true} ...
	I0719 12:01:32.253936    4831 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-871000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 12:01:32.253975    4831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 12:01:32.261806    4831 command_runner.go:130] > kubeadm
	I0719 12:01:32.261814    4831 command_runner.go:130] > kubectl
	I0719 12:01:32.261817    4831 command_runner.go:130] > kubelet
	I0719 12:01:32.261923    4831 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 12:01:32.261965    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0719 12:01:32.270047    4831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0719 12:01:32.283477    4831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 12:01:32.297060    4831 ssh_runner.go:195] Run: grep 192.169.0.16	control-plane.minikube.internal$ /etc/hosts
	I0719 12:01:32.299937    4831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 12:01:32.309813    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:01:32.409600    4831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 12:01:32.424238    4831 host.go:66] Checking if "multinode-871000" exists ...
	I0719 12:01:32.424546    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:32.424566    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:32.433492    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53215
	I0719 12:01:32.433844    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:32.434194    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:32.434205    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:32.434437    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:32.434560    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:01:32.434657    4831 start.go:317] joinCluster: &{Name:multinode-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.19 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:01:32.434731    4831 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0719 12:01:32.434770    4831 host.go:66] Checking if "multinode-871000-m02" exists ...
	I0719 12:01:32.435045    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:32.435069    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:32.444130    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53217
	I0719 12:01:32.444475    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:32.444802    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:32.444813    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:32.445037    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:32.445149    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 12:01:32.445235    4831 mustload.go:65] Loading cluster: multinode-871000
	I0719 12:01:32.445434    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:01:32.445651    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:32.445669    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:32.454481    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53219
	I0719 12:01:32.454849    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:32.455206    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:32.455224    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:32.455431    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:32.455536    4831 main.go:141] libmachine: (multinode-871000) Calling .GetState
	I0719 12:01:32.455620    4831 main.go:141] libmachine: (multinode-871000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:32.455696    4831 main.go:141] libmachine: (multinode-871000) DBG | hyperkit pid from json: 4843
	I0719 12:01:32.456654    4831 host.go:66] Checking if "multinode-871000" exists ...
	I0719 12:01:32.456914    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:32.456938    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:32.465782    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53221
	I0719 12:01:32.466117    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:32.466453    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:32.466470    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:32.466689    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:32.466808    4831 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 12:01:32.466890    4831 api_server.go:166] Checking apiserver status ...
	I0719 12:01:32.466943    4831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 12:01:32.466953    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:01:32.467031    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:01:32.467122    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:01:32.467211    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:01:32.467290    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/id_rsa Username:docker}
	I0719 12:01:32.508424    4831 command_runner.go:130] > 1608
	I0719 12:01:32.508522    4831 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1608/cgroup
	W0719 12:01:32.516261    4831 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1608/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 12:01:32.516331    4831 ssh_runner.go:195] Run: ls
	I0719 12:01:32.520049    4831 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0719 12:01:32.523301    4831 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0719 12:01:32.523370    4831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-871000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0719 12:01:32.605153    4831 command_runner.go:130] > node/multinode-871000-m02 cordoned
	I0719 12:01:35.622846    4831 command_runner.go:130] > pod "busybox-fc5497c4f-t7lpn" has DeletionTimestamp older than 1 seconds, skipping
	I0719 12:01:35.622859    4831 command_runner.go:130] > node/multinode-871000-m02 drained
	I0719 12:01:35.624380    4831 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-897rz, kube-system/kube-proxy-t9bqq
	I0719 12:01:35.624481    4831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-871000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.101108094s)
	I0719 12:01:35.624490    4831 node.go:128] successfully drained node "multinode-871000-m02"
	I0719 12:01:35.624512    4831 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0719 12:01:35.624530    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 12:01:35.624668    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 12:01:35.624765    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 12:01:35.624854    4831 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 12:01:35.624941    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/id_rsa Username:docker}
	I0719 12:01:35.708033    4831 command_runner.go:130] > [preflight] Running pre-flight checks
	I0719 12:01:35.708391    4831 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0719 12:01:35.708453    4831 command_runner.go:130] > [reset] Stopping the kubelet service
	I0719 12:01:35.714749    4831 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0719 12:01:35.927378    4831 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0719 12:01:35.928149    4831 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0719 12:01:35.928161    4831 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0719 12:01:35.928170    4831 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0719 12:01:35.928176    4831 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0719 12:01:35.928181    4831 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0719 12:01:35.928186    4831 command_runner.go:130] > to reset your system's IPVS tables.
	I0719 12:01:35.928192    4831 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0719 12:01:35.928205    4831 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0719 12:01:35.929009    4831 command_runner.go:130] ! W0719 19:01:35.897039    1350 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0719 12:01:35.929046    4831 command_runner.go:130] ! W0719 19:01:36.115281    1350 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod fa390727edc254d6b9d466a058e2931134bb55963090ecee2afc18bba72c7d10: output: E0719 19:01:36.015602    1379 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-t7lpn_default\" network: cni config uninitialized" podSandboxID="fa390727edc254d6b9d466a058e2931134bb55963090ecee2afc18bba72c7d10"
	I0719 12:01:35.929059    4831 command_runner.go:130] ! time="2024-07-19T19:01:36Z" level=fatal msg="stopping the pod sandbox \"fa390727edc254d6b9d466a058e2931134bb55963090ecee2afc18bba72c7d10\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-t7lpn_default\" network: cni config uninitialized"
	I0719 12:01:35.929063    4831 command_runner.go:130] ! : exit status 1
	I0719 12:01:35.929075    4831 node.go:155] successfully reset node "multinode-871000-m02"
	I0719 12:01:35.929331    4831 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 12:01:35.929542    4831 kapi.go:59] client config for multinode-871000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/client.key", CAFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xebf8ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 12:01:35.929798    4831 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0719 12:01:35.929828    4831 round_trippers.go:463] DELETE https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:35.929832    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:35.929841    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:35.929845    4831 round_trippers.go:473]     Content-Type: application/json
	I0719 12:01:35.929848    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:35.932548    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:35.932559    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:35.932564    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:35.932567    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:35.932570    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:35.932572    4831 round_trippers.go:580]     Content-Length: 171
	I0719 12:01:35.932577    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:36 GMT
	I0719 12:01:35.932580    4831 round_trippers.go:580]     Audit-Id: 39ce5c03-5c86-4cfe-8e92-cfccfd4d77aa
	I0719 12:01:35.932583    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:35.932593    4831 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-871000-m02","kind":"nodes","uid":"e0450b58-f42e-4eee-a22b-05f89b4b721d"}}
	I0719 12:01:35.932611    4831 node.go:180] successfully deleted node "multinode-871000-m02"
	I0719 12:01:35.932621    4831 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0719 12:01:35.932640    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 12:01:35.932656    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 12:01:35.932799    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 12:01:35.932885    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 12:01:35.932970    4831 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 12:01:35.933043    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/id_rsa Username:docker}
	I0719 12:01:36.016218    4831 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token pth09b.v0t542n3s0kf9m1k --discovery-token-ca-cert-hash sha256:afa13eeacf66fe5a050050bebf5083e6d92babcb46083a82ef00c5e81d9e788a 
	I0719 12:01:36.017205    4831 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0719 12:01:36.017227    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pth09b.v0t542n3s0kf9m1k --discovery-token-ca-cert-hash sha256:afa13eeacf66fe5a050050bebf5083e6d92babcb46083a82ef00c5e81d9e788a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-871000-m02"
	I0719 12:01:36.050536    4831 command_runner.go:130] > [preflight] Running pre-flight checks
	I0719 12:01:36.156351    4831 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0719 12:01:36.156369    4831 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0719 12:01:36.186848    4831 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 12:01:36.186863    4831 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 12:01:36.186882    4831 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0719 12:01:36.287480    4831 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 12:01:36.789623    4831 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.485895ms
	I0719 12:01:36.789640    4831 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0719 12:01:37.300697    4831 command_runner.go:130] > This node has joined the cluster:
	I0719 12:01:37.300712    4831 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0719 12:01:37.300718    4831 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0719 12:01:37.300723    4831 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0719 12:01:37.302165    4831 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 12:01:37.302235    4831 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pth09b.v0t542n3s0kf9m1k --discovery-token-ca-cert-hash sha256:afa13eeacf66fe5a050050bebf5083e6d92babcb46083a82ef00c5e81d9e788a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-871000-m02": (1.284993769s)
	I0719 12:01:37.302253    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 12:01:37.409883    4831 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0719 12:01:37.515502    4831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-871000-m02 minikube.k8s.io/updated_at=2024_07_19T12_01_37_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=multinode-871000 minikube.k8s.io/primary=false
	I0719 12:01:37.585139    4831 command_runner.go:130] > node/multinode-871000-m02 labeled
	I0719 12:01:37.586417    4831 start.go:319] duration metric: took 5.151775223s to joinCluster
	I0719 12:01:37.586467    4831 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0719 12:01:37.586660    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:01:37.606739    4831 out.go:177] * Verifying Kubernetes components...
	I0719 12:01:37.649725    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:01:37.751645    4831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 12:01:37.764431    4831 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 12:01:37.764629    4831 kapi.go:59] client config for multinode-871000: &rest.Config{Host:"https://192.169.0.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/client.key", CAFile:"/Users/jenkins/minikube-integration/19307-1053/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xebf8ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 12:01:37.764804    4831 node_ready.go:35] waiting up to 6m0s for node "multinode-871000-m02" to be "Ready" ...
	I0719 12:01:37.764850    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:37.764855    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:37.764861    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:37.764865    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:37.766434    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:37.766446    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:37.766466    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:37.766478    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:37.766491    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:37.766499    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:37 GMT
	I0719 12:01:37.766510    4831 round_trippers.go:580]     Audit-Id: f7abf541-61fe-49a3-a985-9891f0494517
	I0719 12:01:37.766518    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:37.766721    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1087","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}}, [truncated 3564 chars]
	I0719 12:01:38.265874    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:38.265887    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:38.265893    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:38.265898    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:38.267485    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:38.267495    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:38.267500    4831 round_trippers.go:580]     Audit-Id: bc869004-cebf-4d35-8b5c-9ed6e4ee6eef
	I0719 12:01:38.267505    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:38.267508    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:38.267511    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:38.267513    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:38.267516    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:38 GMT
	I0719 12:01:38.267630    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:38.764931    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:38.764954    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:38.764961    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:38.764963    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:38.767327    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:38.767340    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:38.767345    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:38.767350    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:38.767362    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:38 GMT
	I0719 12:01:38.767369    4831 round_trippers.go:580]     Audit-Id: c26a5767-263b-444c-997e-0a00c04807d5
	I0719 12:01:38.767374    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:38.767379    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:38.767542    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:39.265019    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:39.265032    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:39.265038    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:39.265042    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:39.266727    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:39.266739    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:39.266747    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:39.266752    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:39 GMT
	I0719 12:01:39.266756    4831 round_trippers.go:580]     Audit-Id: 5fb15f0f-982e-4507-9642-d8d2d9abaeb8
	I0719 12:01:39.266762    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:39.266764    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:39.266767    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:39.266905    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:39.765285    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:39.765306    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:39.765318    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:39.765324    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:39.768076    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:39.768092    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:39.768114    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:39.768121    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:39.768125    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:39.768130    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:39.768135    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:39 GMT
	I0719 12:01:39.768139    4831 round_trippers.go:580]     Audit-Id: 2114d0e2-a351-41f4-bf52-988a5b256300
	I0719 12:01:39.768510    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:39.768756    4831 node_ready.go:53] node "multinode-871000-m02" has status "Ready":"False"
	I0719 12:01:40.265989    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:40.266008    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:40.266020    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:40.266025    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:40.268377    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:40.268398    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:40.268436    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:40.268461    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:40.268472    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:40.268478    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:40.268484    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:40 GMT
	I0719 12:01:40.268494    4831 round_trippers.go:580]     Audit-Id: d3ac57b3-9a60-4301-9fc5-fdce427f1686
	I0719 12:01:40.268599    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:40.765782    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:40.765806    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:40.765819    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:40.765824    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:40.768594    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:40.768613    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:40.768647    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:40.768681    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:40.768687    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:40.768691    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:40.768695    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:40 GMT
	I0719 12:01:40.768699    4831 round_trippers.go:580]     Audit-Id: 45ef635c-47c3-4ee4-b5a1-76eaff193f8b
	I0719 12:01:40.768936    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:41.266054    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:41.266075    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:41.266084    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:41.266093    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:41.268415    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:41.268431    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:41.268439    4831 round_trippers.go:580]     Audit-Id: b7059ccb-980e-49b9-a10a-3d5dccaceb5c
	I0719 12:01:41.268444    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:41.268449    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:41.268464    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:41.268468    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:41.268471    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:41 GMT
	I0719 12:01:41.268634    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:41.766644    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:41.766666    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:41.766675    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:41.766680    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:41.768834    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:41.768848    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:41.768853    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:41.768857    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:41 GMT
	I0719 12:01:41.768860    4831 round_trippers.go:580]     Audit-Id: e0e224f5-ec58-4ecf-af00-5adecd99eda3
	I0719 12:01:41.768863    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:41.768869    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:41.768873    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:41.768983    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:41.769156    4831 node_ready.go:53] node "multinode-871000-m02" has status "Ready":"False"
	I0719 12:01:42.266666    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:42.266689    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:42.266696    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:42.266699    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:42.268252    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:42.268264    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:42.268271    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:42.268276    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:42.268279    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:42.268283    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:42 GMT
	I0719 12:01:42.268285    4831 round_trippers.go:580]     Audit-Id: 340c137c-289d-4c8a-b5c7-ae0b763fc314
	I0719 12:01:42.268289    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:42.268368    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:42.765414    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:42.765430    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:42.765438    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:42.765442    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:42.767314    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:42.767325    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:42.767330    4831 round_trippers.go:580]     Audit-Id: 8580e686-730c-47e6-af94-c9f348ef24fc
	I0719 12:01:42.767333    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:42.767337    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:42.767340    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:42.767343    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:42.767345    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:42 GMT
	I0719 12:01:42.767464    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:43.265515    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:43.265530    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:43.265538    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:43.265542    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:43.267130    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:43.267160    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:43.267166    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:43.267172    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:43 GMT
	I0719 12:01:43.267174    4831 round_trippers.go:580]     Audit-Id: 6da3f2a4-a472-4418-9977-afe5cdc1923c
	I0719 12:01:43.267177    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:43.267185    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:43.267187    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:43.267294    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:43.765367    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:43.765393    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:43.765406    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:43.765412    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:43.768182    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:43.768197    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:43.768205    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:43 GMT
	I0719 12:01:43.768209    4831 round_trippers.go:580]     Audit-Id: 24ce980a-3121-45b3-a46a-050b0025e527
	I0719 12:01:43.768213    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:43.768218    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:43.768221    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:43.768224    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:43.768285    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:44.265772    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:44.265803    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:44.265887    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:44.265900    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:44.268379    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:44.268394    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:44.268401    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:44.268406    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:44.268411    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:44.268418    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:44.268425    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:44 GMT
	I0719 12:01:44.268430    4831 round_trippers.go:580]     Audit-Id: e3b9c9f7-69b1-432b-b77c-6beaa9d21a96
	I0719 12:01:44.268689    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:44.268907    4831 node_ready.go:53] node "multinode-871000-m02" has status "Ready":"False"
	I0719 12:01:44.765090    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:44.765110    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:44.765122    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:44.765129    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:44.767441    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:44.767454    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:44.767490    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:44.767499    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:44 GMT
	I0719 12:01:44.767503    4831 round_trippers.go:580]     Audit-Id: c4a012e5-0d69-4feb-86da-02ac3b9f71d9
	I0719 12:01:44.767509    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:44.767513    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:44.767519    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:44.767819    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:45.264985    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:45.265015    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:45.265028    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:45.265047    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:45.267769    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:45.267781    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:45.267788    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:45.267792    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:45 GMT
	I0719 12:01:45.267796    4831 round_trippers.go:580]     Audit-Id: 91fe7b24-4012-474f-b81a-d98e00c6c0b4
	I0719 12:01:45.267799    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:45.267803    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:45.267809    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:45.268174    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:45.764951    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:45.764964    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:45.764970    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:45.764972    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:45.766670    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:45.766679    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:45.766684    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:45 GMT
	I0719 12:01:45.766687    4831 round_trippers.go:580]     Audit-Id: 3f57bc9c-1895-425e-8f32-7c778ac1127f
	I0719 12:01:45.766690    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:45.766692    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:45.766695    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:45.766698    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:45.766743    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:46.265560    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:46.265587    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:46.265596    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:46.265602    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:46.269862    4831 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 12:01:46.269877    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:46.269884    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:46.269888    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:46.269902    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:46.269906    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:46.269911    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:46 GMT
	I0719 12:01:46.269915    4831 round_trippers.go:580]     Audit-Id: 40b37b98-047c-450f-b821-397b9e73ffb0
	I0719 12:01:46.269978    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:46.270197    4831 node_ready.go:53] node "multinode-871000-m02" has status "Ready":"False"
	I0719 12:01:46.764999    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:46.765011    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:46.765018    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:46.765021    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:46.766713    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:46.766725    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:46.766731    4831 round_trippers.go:580]     Audit-Id: 288bfc2e-d63e-46e9-9c5c-57dc3867758a
	I0719 12:01:46.766734    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:46.766736    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:46.766739    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:46.766742    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:46.766750    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:46 GMT
	I0719 12:01:46.766982    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1089","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3673 chars]
	I0719 12:01:47.266637    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:47.266657    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:47.266669    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:47.266675    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:47.269479    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:47.269492    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:47.269499    4831 round_trippers.go:580]     Audit-Id: 91cabe9e-70b8-45e7-90ef-5fc77704a9c2
	I0719 12:01:47.269504    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:47.269508    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:47.269512    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:47.269516    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:47.269521    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:47 GMT
	I0719 12:01:47.269722    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:47.765021    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:47.765044    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:47.765057    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:47.765065    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:47.767754    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:47.767772    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:47.767780    4831 round_trippers.go:580]     Audit-Id: e18f83de-7fef-4570-89eb-0bdf49eabdff
	I0719 12:01:47.767784    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:47.767788    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:47.767846    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:47.767855    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:47.767859    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:47 GMT
	I0719 12:01:47.767924    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:48.266348    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:48.266381    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:48.266388    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:48.266393    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:48.267614    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:48.267624    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:48.267630    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:48.267641    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:48 GMT
	I0719 12:01:48.267645    4831 round_trippers.go:580]     Audit-Id: c60994e0-3273-44f0-9b1f-fdb43c3b91ff
	I0719 12:01:48.267647    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:48.267650    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:48.267652    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:48.267923    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:48.766000    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:48.766021    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:48.766034    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:48.766041    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:48.768309    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:48.768322    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:48.768329    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:48.768334    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:48.768372    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:48.768380    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:48.768384    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:48 GMT
	I0719 12:01:48.768388    4831 round_trippers.go:580]     Audit-Id: a74a3bf0-4931-4d9b-bec9-86a67668fe03
	I0719 12:01:48.768624    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:48.768851    4831 node_ready.go:53] node "multinode-871000-m02" has status "Ready":"False"
	I0719 12:01:49.265151    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:49.265173    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:49.265185    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:49.265191    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:49.268093    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:49.268112    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:49.268124    4831 round_trippers.go:580]     Audit-Id: 2058fe3a-ebe9-4fcd-a764-3672fdd77552
	I0719 12:01:49.268132    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:49.268140    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:49.268145    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:49.268150    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:49.268154    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:49 GMT
	I0719 12:01:49.268309    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:49.766279    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:49.766312    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:49.766324    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:49.766338    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:49.768937    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:49.768954    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:49.768961    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:49.768965    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:49.768968    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:49 GMT
	I0719 12:01:49.768973    4831 round_trippers.go:580]     Audit-Id: 6b0e57a4-5b0a-4aaa-b980-9cd99b7c0667
	I0719 12:01:49.768979    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:49.768982    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:49.769050    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:50.265661    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:50.265685    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:50.265697    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:50.265703    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:50.268658    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:50.268675    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:50.268682    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:50.268687    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:50 GMT
	I0719 12:01:50.268691    4831 round_trippers.go:580]     Audit-Id: ae59334d-22b0-4b64-a826-0df64953b4cd
	I0719 12:01:50.268695    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:50.268698    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:50.268702    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:50.269379    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:50.764951    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:50.764964    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:50.764969    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:50.764986    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:50.766535    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:50.766545    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:50.766550    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:50.766553    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:50.766556    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:50.766559    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:50 GMT
	I0719 12:01:50.766573    4831 round_trippers.go:580]     Audit-Id: 710225d9-6f59-4f1f-84a4-e01469a3682c
	I0719 12:01:50.766578    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:50.766723    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:51.265932    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:51.265952    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:51.265972    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:51.265978    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:51.268141    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:51.268157    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:51.268165    4831 round_trippers.go:580]     Audit-Id: 16f84d0e-6391-4cee-8f4b-507ac564ac4c
	I0719 12:01:51.268169    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:51.268174    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:51.268179    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:51.268187    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:51.268191    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:51 GMT
	I0719 12:01:51.268324    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:51.268570    4831 node_ready.go:53] node "multinode-871000-m02" has status "Ready":"False"
	I0719 12:01:51.765832    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:51.765854    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:51.765866    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:51.765873    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:51.768567    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:51.768580    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:51.768587    4831 round_trippers.go:580]     Audit-Id: 71e78895-27d0-40b7-923a-177c5af8be35
	I0719 12:01:51.768593    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:51.768596    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:51.768599    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:51.768622    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:51.768631    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:51 GMT
	I0719 12:01:51.769068    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1121","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4065 chars]
	I0719 12:01:52.265410    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:52.265436    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.265513    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.265525    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.267784    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:52.267802    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.267820    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.267826    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.267834    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.267838    4831 round_trippers.go:580]     Audit-Id: 9999b67b-c754-4736-838b-505e58406082
	I0719 12:01:52.267841    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.267844    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.267908    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1134","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3931 chars]
	I0719 12:01:52.268141    4831 node_ready.go:49] node "multinode-871000-m02" has status "Ready":"True"
	I0719 12:01:52.268151    4831 node_ready.go:38] duration metric: took 14.503383738s for node "multinode-871000-m02" to be "Ready" ...
	I0719 12:01:52.268159    4831 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 12:01:52.268198    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods
	I0719 12:01:52.268206    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.268213    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.268218    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.270589    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:52.270606    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.270617    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.270629    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.270640    4831 round_trippers.go:580]     Audit-Id: 5bd94b47-fb60-4f3e-a0d8-8b3573293b04
	I0719 12:01:52.270659    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.270665    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.270674    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.271354    4831 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1134"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"1037","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86445 chars]
	I0719 12:01:52.273219    4831 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.273253    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-85r26
	I0719 12:01:52.273257    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.273262    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.273268    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.274481    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:52.274490    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.274507    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.274515    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.274518    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.274521    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.274525    4831 round_trippers.go:580]     Audit-Id: 4e6a1dda-7025-40a9-8dbb-8aff01d72511
	I0719 12:01:52.274528    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.274595    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-85r26","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"c7d62ec5-693b-46ab-9437-86aef8b469e8","resourceVersion":"1037","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"60854d1a-6a25-465d-90df-addb74e83410","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60854d1a-6a25-465d-90df-addb74e83410\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0719 12:01:52.274831    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:52.274839    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.274844    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.274849    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.275893    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:52.275901    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.275911    4831 round_trippers.go:580]     Audit-Id: 5caee3fd-28b0-4404-811c-7a58de2da195
	I0719 12:01:52.275916    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.275921    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.275926    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.275932    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.275937    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.276038    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0719 12:01:52.276204    4831 pod_ready.go:92] pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:52.276212    4831 pod_ready.go:81] duration metric: took 2.983057ms for pod "coredns-7db6d8ff4d-85r26" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.276218    4831 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.276249    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-871000
	I0719 12:01:52.276254    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.276259    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.276264    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.277157    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:52.277164    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.277169    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.277172    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.277174    4831 round_trippers.go:580]     Audit-Id: 71c4c88f-d401-43fe-88bc-540772973797
	I0719 12:01:52.277176    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.277179    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.277181    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.277316    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-871000","namespace":"kube-system","uid":"8818ed52-4b2d-4629-af02-b835e3cfa034","resourceVersion":"1020","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.16:2379","kubernetes.io/config.hash":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.mirror":"1f124d3eaf8e766329e8292bd8882f14","kubernetes.io/config.seen":"2024-07-19T18:55:05.740545259Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0719 12:01:52.277529    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:52.277536    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.277541    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.277544    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.278676    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:52.278684    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.278688    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.278691    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.278709    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.278728    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.278733    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.278736    4831 round_trippers.go:580]     Audit-Id: 3c6b0076-0a4d-4f0a-88ee-b7f12cc3d3fe
	I0719 12:01:52.278872    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0719 12:01:52.279045    4831 pod_ready.go:92] pod "etcd-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:52.279053    4831 pod_ready.go:81] duration metric: took 2.830582ms for pod "etcd-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.279063    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.279097    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-871000
	I0719 12:01:52.279102    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.279107    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.279111    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.279995    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:52.280002    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.280008    4831 round_trippers.go:580]     Audit-Id: cdc2aab4-c9fa-4cd3-b5ec-c5ecc59b279e
	I0719 12:01:52.280016    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.280019    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.280022    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.280025    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.280028    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.280327    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-871000","namespace":"kube-system","uid":"9f3fdf92-3cbd-411c-802e-cbbbe1b60d68","resourceVersion":"993","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.16:8443","kubernetes.io/config.hash":"1acc565de321609aa117f6402dfd5ae5","kubernetes.io/config.mirror":"1acc565de321609aa117f6402dfd5ae5","kubernetes.io/config.seen":"2024-07-19T18:55:05.740548209Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0719 12:01:52.280566    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:52.280573    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.280579    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.280584    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.281573    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:52.281580    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.281587    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.281592    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.281595    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.281600    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.281604    4831 round_trippers.go:580]     Audit-Id: af92d4c3-07ce-47ed-a287-2c1f4da9f9e1
	I0719 12:01:52.281607    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.281712    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0719 12:01:52.281882    4831 pod_ready.go:92] pod "kube-apiserver-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:52.281889    4831 pod_ready.go:81] duration metric: took 2.821128ms for pod "kube-apiserver-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.281895    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.281928    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-871000
	I0719 12:01:52.281936    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.281941    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.281945    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.282956    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:52.282962    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.282967    4831 round_trippers.go:580]     Audit-Id: 37b3d6f6-d5cb-41bb-bb6a-0314d8dae796
	I0719 12:01:52.282970    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.282974    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.282979    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.282983    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.282986    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.283119    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-871000","namespace":"kube-system","uid":"74e143fb-26b8-4d1d-b07a-f1b2c590133f","resourceVersion":"1003","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8f11f40ce051787c8d8ced4f83327f27","kubernetes.io/config.mirror":"8f11f40ce051787c8d8ced4f83327f27","kubernetes.io/config.seen":"2024-07-19T18:55:05.740548943Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0719 12:01:52.283339    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:52.283346    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.283351    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.283355    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.284256    4831 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 12:01:52.284262    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.284267    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.284271    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.284275    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.284280    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.284284    4831 round_trippers.go:580]     Audit-Id: 289b264b-8fe4-44bb-a7ac-cfbefde406df
	I0719 12:01:52.284287    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.284403    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0719 12:01:52.284570    4831 pod_ready.go:92] pod "kube-controller-manager-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:52.284577    4831 pod_ready.go:81] duration metric: took 2.676992ms for pod "kube-controller-manager-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.284584    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-86ssb" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.466629    4831 request.go:629] Waited for 181.914475ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-86ssb
	I0719 12:01:52.466686    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-86ssb
	I0719 12:01:52.466695    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.466706    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.466715    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.469198    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:52.469214    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.469225    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.469231    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.469236    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.469240    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.469245    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.469249    4831 round_trippers.go:580]     Audit-Id: 429dceb6-e86e-4018-b556-14b2a2f022b2
	I0719 12:01:52.469457    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-86ssb","generateName":"kube-proxy-","namespace":"kube-system","uid":"37609942-98d8-4c6b-b339-53bf3a901e3f","resourceVersion":"1128","creationTimestamp":"2024-07-19T18:57:03Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:57:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0719 12:01:52.666944    4831 request.go:629] Waited for 197.144047ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m03
	I0719 12:01:52.667016    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m03
	I0719 12:01:52.667030    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.667046    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.667057    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.669979    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:52.669997    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.670008    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.670013    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.670018    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.670022    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:52 GMT
	I0719 12:01:52.670045    4831 round_trippers.go:580]     Audit-Id: b275a4c2-4933-457c-b794-8cc1c82f8ff3
	I0719 12:01:52.670053    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.670190    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m03","uid":"4745805a-e01a-4411-b942-abcd092662c6","resourceVersion":"1125","creationTimestamp":"2024-07-19T18:59:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T11_59_53_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:59:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4301 chars]
	I0719 12:01:52.670429    4831 pod_ready.go:97] node "multinode-871000-m03" hosting pod "kube-proxy-86ssb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000-m03" has status "Ready":"Unknown"
	I0719 12:01:52.670443    4831 pod_ready.go:81] duration metric: took 385.855231ms for pod "kube-proxy-86ssb" in "kube-system" namespace to be "Ready" ...
	E0719 12:01:52.670470    4831 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-871000-m03" hosting pod "kube-proxy-86ssb" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-871000-m03" has status "Ready":"Unknown"
	I0719 12:01:52.670488    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-89hm2" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:52.865457    4831 request.go:629] Waited for 194.920336ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89hm2
	I0719 12:01:52.865509    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89hm2
	I0719 12:01:52.865515    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:52.865521    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:52.865525    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:52.869315    4831 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 12:01:52.869326    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:52.869343    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:52.869346    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:52.869350    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:52.869352    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:52.869356    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:53 GMT
	I0719 12:01:52.869359    4831 round_trippers.go:580]     Audit-Id: 6d8a42fa-5f07-4ba5-b901-8d8df07718db
	I0719 12:01:52.869579    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-89hm2","generateName":"kube-proxy-","namespace":"kube-system","uid":"77b4b485-53f0-4480-8b62-a1df26f037b8","resourceVersion":"979","creationTimestamp":"2024-07-19T18:55:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0719 12:01:53.066786    4831 request.go:629] Waited for 196.934315ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:53.066915    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:53.066927    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:53.066938    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:53.066947    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:53.069488    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:53.069505    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:53.069512    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:53.069518    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:53.069531    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:53 GMT
	I0719 12:01:53.069536    4831 round_trippers.go:580]     Audit-Id: 29f7b1f1-8716-41e3-a3fb-e5054c127035
	I0719 12:01:53.069540    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:53.069543    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:53.069761    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0719 12:01:53.070013    4831 pod_ready.go:92] pod "kube-proxy-89hm2" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:53.070025    4831 pod_ready.go:81] duration metric: took 399.524656ms for pod "kube-proxy-89hm2" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:53.070034    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t9bqq" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:53.266326    4831 request.go:629] Waited for 196.226835ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t9bqq
	I0719 12:01:53.266366    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t9bqq
	I0719 12:01:53.266372    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:53.266380    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:53.266384    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:53.267734    4831 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 12:01:53.267743    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:53.267748    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:53.267751    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:53.267754    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:53.267756    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:53.267759    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:53 GMT
	I0719 12:01:53.267762    4831 round_trippers.go:580]     Audit-Id: b4ba6119-891d-44da-b73d-627e20735b34
	I0719 12:01:53.267908    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t9bqq","generateName":"kube-proxy-","namespace":"kube-system","uid":"5ef191fc-6e2e-486c-b825-76c6e0d95416","resourceVersion":"1107","creationTimestamp":"2024-07-19T18:56:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f00c7f30-00ff-4a89-8cef-af83638698fb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:56:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f00c7f30-00ff-4a89-8cef-af83638698fb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0719 12:01:53.466322    4831 request.go:629] Waited for 198.087992ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:53.466383    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000-m02
	I0719 12:01:53.466393    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:53.466402    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:53.466438    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:53.469092    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:53.469109    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:53.469116    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:53.469126    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:53 GMT
	I0719 12:01:53.469130    4831 round_trippers.go:580]     Audit-Id: 6e4d112e-e44b-4be7-9c97-17550fbf549f
	I0719 12:01:53.469133    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:53.469136    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:53.469139    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:53.469255    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000-m02","uid":"72780593-74d8-4bbd-8918-42a093b65856","resourceVersion":"1135","creationTimestamp":"2024-07-19T19:01:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T12_01_37_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T19:01:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0719 12:01:53.469467    4831 pod_ready.go:92] pod "kube-proxy-t9bqq" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:53.469477    4831 pod_ready.go:81] duration metric: took 399.439151ms for pod "kube-proxy-t9bqq" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:53.469486    4831 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:53.667108    4831 request.go:629] Waited for 197.550527ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-871000
	I0719 12:01:53.667258    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-871000
	I0719 12:01:53.667270    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:53.667282    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:53.667290    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:53.669963    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:53.669979    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:53.669986    4831 round_trippers.go:580]     Audit-Id: df666b36-6b90-4074-890f-104b4903ef39
	I0719 12:01:53.670018    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:53.670026    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:53.670029    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:53.670046    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:53.670051    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:53 GMT
	I0719 12:01:53.670168    4831 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-871000","namespace":"kube-system","uid":"0d73182a-0458-470e-ac06-ccde27fa113a","resourceVersion":"1012","creationTimestamp":"2024-07-19T18:55:05Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b43ab2caff1f80690c8bfbb88ac08a85","kubernetes.io/config.mirror":"b43ab2caff1f80690c8bfbb88ac08a85","kubernetes.io/config.seen":"2024-07-19T18:55:00.040869314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T18:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0719 12:01:53.866263    4831 request.go:629] Waited for 195.758116ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:53.866381    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes/multinode-871000
	I0719 12:01:53.866388    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:53.866398    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:53.866406    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:53.869123    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:53.869138    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:53.869145    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:53.869150    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:53.869155    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:53.869158    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:54 GMT
	I0719 12:01:53.869162    4831 round_trippers.go:580]     Audit-Id: 5ab4535d-1732-458f-bd9e-5973cb44efb7
	I0719 12:01:53.869165    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:53.869396    4831 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T18:55:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0719 12:01:53.869653    4831 pod_ready.go:92] pod "kube-scheduler-multinode-871000" in "kube-system" namespace has status "Ready":"True"
	I0719 12:01:53.869670    4831 pod_ready.go:81] duration metric: took 400.174664ms for pod "kube-scheduler-multinode-871000" in "kube-system" namespace to be "Ready" ...
	I0719 12:01:53.869679    4831 pod_ready.go:38] duration metric: took 1.601516924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 12:01:53.869700    4831 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 12:01:53.869770    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 12:01:53.880607    4831 system_svc.go:56] duration metric: took 10.90301ms WaitForService to wait for kubelet
	I0719 12:01:53.880632    4831 kubeadm.go:582] duration metric: took 16.294192012s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:01:53.880647    4831 node_conditions.go:102] verifying NodePressure condition ...
	I0719 12:01:54.066856    4831 request.go:629] Waited for 186.137672ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.16:8443/api/v1/nodes
	I0719 12:01:54.066970    4831 round_trippers.go:463] GET https://192.169.0.16:8443/api/v1/nodes
	I0719 12:01:54.066980    4831 round_trippers.go:469] Request Headers:
	I0719 12:01:54.066991    4831 round_trippers.go:473]     Accept: application/json, */*
	I0719 12:01:54.066999    4831 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0719 12:01:54.069828    4831 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 12:01:54.069845    4831 round_trippers.go:577] Response Headers:
	I0719 12:01:54.069852    4831 round_trippers.go:580]     Audit-Id: 032d2722-6758-4f22-b522-865e94a62ee3
	I0719 12:01:54.069856    4831 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 12:01:54.069860    4831 round_trippers.go:580]     Content-Type: application/json
	I0719 12:01:54.069864    4831 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d1c10ea8-2ba5-4b86-bb92-cbc072e786a5
	I0719 12:01:54.069868    4831 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0941c278-d1c7-477e-8324-3d1b8c1d0c96
	I0719 12:01:54.069874    4831 round_trippers.go:580]     Date: Fri, 19 Jul 2024 19:01:54 GMT
	I0719 12:01:54.070339    4831 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1137"},"items":[{"metadata":{"name":"multinode-871000","uid":"66c9bca0-8514-490a-8d08-6b85092f337a","resourceVersion":"1042","creationTimestamp":"2024-07-19T18:55:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-871000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ffd2625ecdd21666acefb1ad4fc0b175f94ab221","minikube.k8s.io/name":"multinode-871000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T11_55_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15421 chars]
	I0719 12:01:54.070883    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:01:54.070895    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:01:54.070902    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:01:54.070906    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:01:54.070910    4831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 12:01:54.070931    4831 node_conditions.go:123] node cpu capacity is 2
	I0719 12:01:54.070942    4831 node_conditions.go:105] duration metric: took 190.291261ms to run NodePressure ...
	I0719 12:01:54.070955    4831 start.go:241] waiting for startup goroutines ...
	I0719 12:01:54.070981    4831 start.go:255] writing updated cluster config ...
	I0719 12:01:54.093124    4831 out.go:177] 
	I0719 12:01:54.115152    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:01:54.115281    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:01:54.137731    4831 out.go:177] * Starting "multinode-871000-m03" worker node in "multinode-871000" cluster
	I0719 12:01:54.180697    4831 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:01:54.180719    4831 cache.go:56] Caching tarball of preloaded images
	I0719 12:01:54.180837    4831 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1053/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 12:01:54.180846    4831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:01:54.180923    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:01:54.181476    4831 start.go:360] acquireMachinesLock for multinode-871000-m03: {Name:mk9f33e92e6d472bd2fb7a1dc1c9d72253ce59c8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:01:54.181527    4831 start.go:364] duration metric: took 34.731µs to acquireMachinesLock for "multinode-871000-m03"
	I0719 12:01:54.181541    4831 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:01:54.181546    4831 fix.go:54] fixHost starting: m03
	I0719 12:01:54.181769    4831 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:01:54.181782    4831 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:01:54.190693    4831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53227
	I0719 12:01:54.191078    4831 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:01:54.191409    4831 main.go:141] libmachine: Using API Version  1
	I0719 12:01:54.191427    4831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:01:54.191664    4831 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:01:54.191806    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:01:54.191900    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetState
	I0719 12:01:54.191983    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:54.192077    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | hyperkit pid from json: 4511
	I0719 12:01:54.192997    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | hyperkit pid 4511 missing from process table
	I0719 12:01:54.193036    4831 fix.go:112] recreateIfNeeded on multinode-871000-m03: state=Stopped err=<nil>
	I0719 12:01:54.193050    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	W0719 12:01:54.193163    4831 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:01:54.214531    4831 out.go:177] * Restarting existing hyperkit VM for "multinode-871000-m03" ...
	I0719 12:01:54.256856    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .Start
	I0719 12:01:54.257173    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:54.257206    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/hyperkit.pid
	I0719 12:01:54.257309    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | Using UUID f7120658-3396-42ae-acb1-8416661a4529
	I0719 12:01:54.284634    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | Generated MAC 5e:a3:f5:89:e4:9e
	I0719 12:01:54.284657    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000
	I0719 12:01:54.284793    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7120658-3396-42ae-acb1-8416661a4529", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b7a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0719 12:01:54.284824    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f7120658-3396-42ae-acb1-8416661a4529", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b7a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0719 12:01:54.284902    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f7120658-3396-42ae-acb1-8416661a4529", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/multinode-871000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/tty,log=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/bzimage,/Users/j
enkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000"}
	I0719 12:01:54.284935    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f7120658-3396-42ae-acb1-8416661a4529 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/multinode-871000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/tty,log=/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/bzimage,/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/mult
inode-871000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-871000"
	I0719 12:01:54.284960    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0719 12:01:54.286483    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 DEBUG: hyperkit: Pid is 4868
	I0719 12:01:54.287010    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | Attempt 0
	I0719 12:01:54.287032    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:01:54.287126    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | hyperkit pid from json: 4868
	I0719 12:01:54.288216    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | Searching for 5e:a3:f5:89:e4:9e in /var/db/dhcpd_leases ...
	I0719 12:01:54.288299    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0719 12:01:54.288315    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:36:3f:5c:47:18:4c ID:1,36:3f:5c:47:18:4c Lease:0x669c0983}
	I0719 12:01:54.288341    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f2:4c:c6:88:73:ec ID:1,f2:4c:c6:88:73:ec Lease:0x669c0959}
	I0719 12:01:54.288356    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:5e:a3:f5:89:e4:9e ID:1,5e:a3:f5:89:e4:9e Lease:0x669ab7be}
	I0719 12:01:54.288369    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | Found match: 5e:a3:f5:89:e4:9e
	I0719 12:01:54.288381    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | IP: 192.169.0.19
	I0719 12:01:54.288429    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetConfigRaw
	I0719 12:01:54.289107    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetIP
	I0719 12:01:54.289292    4831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/multinode-871000/config.json ...
	I0719 12:01:54.289738    4831 machine.go:94] provisionDockerMachine start ...
	I0719 12:01:54.289749    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:01:54.289886    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:01:54.290004    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:01:54.290104    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:01:54.290216    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:01:54.290300    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:01:54.290421    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:01:54.290589    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0719 12:01:54.290597    4831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 12:01:54.293969    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0719 12:01:54.302088    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0719 12:01:54.303180    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0719 12:01:54.303201    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0719 12:01:54.303211    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0719 12:01:54.303221    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0719 12:01:54.682894    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0719 12:01:54.682910    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0719 12:01:54.797679    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0719 12:01:54.797695    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0719 12:01:54.797703    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0719 12:01:54.797713    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0719 12:01:54.798562    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0719 12:01:54.798576    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:01:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0719 12:02:00.065970    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:02:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0719 12:02:00.066043    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:02:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0719 12:02:00.066053    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:02:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0719 12:02:00.089773    4831 main.go:141] libmachine: (multinode-871000-m03) DBG | 2024/07/19 12:02:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0719 12:02:29.359756    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 12:02:29.359774    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetMachineName
	I0719 12:02:29.359894    4831 buildroot.go:166] provisioning hostname "multinode-871000-m03"
	I0719 12:02:29.359902    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetMachineName
	I0719 12:02:29.360006    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:29.360091    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:29.360185    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.360263    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.360358    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:29.360484    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:02:29.360658    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0719 12:02:29.360668    4831 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-871000-m03 && echo "multinode-871000-m03" | sudo tee /etc/hostname
	I0719 12:02:29.431846    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-871000-m03
	
	I0719 12:02:29.431861    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:29.432004    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:29.432106    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.432218    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.432318    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:29.432436    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:02:29.432574    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0719 12:02:29.432587    4831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-871000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-871000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-871000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 12:02:29.498792    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 12:02:29.498812    4831 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19307-1053/.minikube CaCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19307-1053/.minikube}
	I0719 12:02:29.498823    4831 buildroot.go:174] setting up certificates
	I0719 12:02:29.498829    4831 provision.go:84] configureAuth start
	I0719 12:02:29.498837    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetMachineName
	I0719 12:02:29.498967    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetIP
	I0719 12:02:29.499068    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:29.499151    4831 provision.go:143] copyHostCerts
	I0719 12:02:29.499179    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem
	I0719 12:02:29.499239    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem, removing ...
	I0719 12:02:29.499245    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem
	I0719 12:02:29.499381    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/cert.pem (1123 bytes)
	I0719 12:02:29.499598    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem
	I0719 12:02:29.499639    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem, removing ...
	I0719 12:02:29.499644    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem
	I0719 12:02:29.499762    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/key.pem (1675 bytes)
	I0719 12:02:29.499934    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem
	I0719 12:02:29.499980    4831 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem, removing ...
	I0719 12:02:29.499985    4831 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem
	I0719 12:02:29.500065    4831 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19307-1053/.minikube/ca.pem (1078 bytes)
	I0719 12:02:29.500222    4831 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca-key.pem org=jenkins.multinode-871000-m03 san=[127.0.0.1 192.169.0.19 localhost minikube multinode-871000-m03]
	I0719 12:02:29.645278    4831 provision.go:177] copyRemoteCerts
	I0719 12:02:29.645324    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 12:02:29.645339    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:29.645484    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:29.645585    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.645676    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:29.645763    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/id_rsa Username:docker}
	I0719 12:02:29.682917    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 12:02:29.682996    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 12:02:29.702497    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 12:02:29.702567    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0719 12:02:29.722047    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 12:02:29.722114    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 12:02:29.741874    4831 provision.go:87] duration metric: took 243.037708ms to configureAuth
	I0719 12:02:29.741888    4831 buildroot.go:189] setting minikube options for container-runtime
	I0719 12:02:29.742066    4831 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:02:29.742080    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:02:29.742229    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:29.742333    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:29.742417    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.742507    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.742593    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:29.742699    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:02:29.742837    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0719 12:02:29.742846    4831 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 12:02:29.803807    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 12:02:29.803823    4831 buildroot.go:70] root file system type: tmpfs
	I0719 12:02:29.803905    4831 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 12:02:29.803915    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:29.804045    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:29.804139    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.804214    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.804302    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:29.804417    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:02:29.804564    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0719 12:02:29.804617    4831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.16"
	Environment="NO_PROXY=192.169.0.16,192.169.0.18"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 12:02:29.875877    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.16
	Environment=NO_PROXY=192.169.0.16,192.169.0.18
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 12:02:29.875896    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:29.876021    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:29.876125    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.876208    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:29.876292    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:29.876424    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:02:29.876575    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0719 12:02:29.876590    4831 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 12:02:31.460754    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 12:02:31.460767    4831 machine.go:97] duration metric: took 37.171139772s to provisionDockerMachine
	I0719 12:02:31.460777    4831 start.go:293] postStartSetup for "multinode-871000-m03" (driver="hyperkit")
	I0719 12:02:31.460790    4831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 12:02:31.460801    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:02:31.460988    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 12:02:31.461003    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:31.461092    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:31.461178    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:31.461265    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:31.461358    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/id_rsa Username:docker}
	I0719 12:02:31.497118    4831 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 12:02:31.500034    4831 command_runner.go:130] > NAME=Buildroot
	I0719 12:02:31.500042    4831 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 12:02:31.500045    4831 command_runner.go:130] > ID=buildroot
	I0719 12:02:31.500049    4831 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 12:02:31.500053    4831 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 12:02:31.500192    4831 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 12:02:31.500199    4831 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1053/.minikube/addons for local assets ...
	I0719 12:02:31.500297    4831 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1053/.minikube/files for local assets ...
	I0719 12:02:31.500478    4831 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> 15922.pem in /etc/ssl/certs
	I0719 12:02:31.500488    4831 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem -> /etc/ssl/certs/15922.pem
	I0719 12:02:31.500693    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 12:02:31.507900    4831 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/ssl/certs/15922.pem --> /etc/ssl/certs/15922.pem (1708 bytes)
	I0719 12:02:31.527754    4831 start.go:296] duration metric: took 66.968583ms for postStartSetup
	I0719 12:02:31.527774    4831 fix.go:56] duration metric: took 37.346347633s for fixHost
	I0719 12:02:31.527790    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:31.527920    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:31.528025    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:31.528116    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:31.528197    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:31.528319    4831 main.go:141] libmachine: Using SSH client type: native
	I0719 12:02:31.528466    4831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd7540c0] 0xd756e20 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0719 12:02:31.528474    4831 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 12:02:31.588013    4831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721415751.783803050
	
	I0719 12:02:31.588028    4831 fix.go:216] guest clock: 1721415751.783803050
	I0719 12:02:31.588034    4831 fix.go:229] Guest: 2024-07-19 12:02:31.78380305 -0700 PDT Remote: 2024-07-19 12:02:31.52778 -0700 PDT m=+119.162745825 (delta=256.02305ms)
	I0719 12:02:31.588048    4831 fix.go:200] guest clock delta is within tolerance: 256.02305ms
	I0719 12:02:31.588053    4831 start.go:83] releasing machines lock for "multinode-871000-m03", held for 37.406637946s
	I0719 12:02:31.588067    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:02:31.588193    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetIP
	I0719 12:02:31.609709    4831 out.go:177] * Found network options:
	I0719 12:02:31.631811    4831 out.go:177]   - NO_PROXY=192.169.0.16,192.169.0.18
	W0719 12:02:31.653553    4831 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 12:02:31.653587    4831 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 12:02:31.653606    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:02:31.654506    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:02:31.654807    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .DriverName
	I0719 12:02:31.654947    4831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 12:02:31.654991    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	W0719 12:02:31.655084    4831 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 12:02:31.655117    4831 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 12:02:31.655203    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:31.655207    4831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 12:02:31.655271    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHHostname
	I0719 12:02:31.655389    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:31.655434    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHPort
	I0719 12:02:31.655606    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHKeyPath
	I0719 12:02:31.655635    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:31.655793    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/id_rsa Username:docker}
	I0719 12:02:31.655804    4831 main.go:141] libmachine: (multinode-871000-m03) Calling .GetSSHUsername
	I0719 12:02:31.655935    4831 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m03/id_rsa Username:docker}
	I0719 12:02:31.690038    4831 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 12:02:31.690089    4831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 12:02:31.690153    4831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 12:02:31.738898    4831 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0719 12:02:31.739073    4831 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0719 12:02:31.739115    4831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 12:02:31.739132    4831 start.go:495] detecting cgroup driver to use...
	I0719 12:02:31.739257    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 12:02:31.755356    4831 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 12:02:31.755645    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 12:02:31.764199    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 12:02:31.773939    4831 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 12:02:31.773998    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 12:02:31.782302    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 12:02:31.790388    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 12:02:31.798379    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 12:02:31.806750    4831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 12:02:31.816315    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 12:02:31.825398    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 12:02:31.834304    4831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 12:02:31.843358    4831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 12:02:31.851357    4831 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 12:02:31.851516    4831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 12:02:31.860150    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:02:31.955825    4831 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 12:02:31.974599    4831 start.go:495] detecting cgroup driver to use...
	I0719 12:02:31.974665    4831 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 12:02:31.989878    4831 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 12:02:31.990301    4831 command_runner.go:130] > [Unit]
	I0719 12:02:31.990311    4831 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 12:02:31.990318    4831 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 12:02:31.990323    4831 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 12:02:31.990328    4831 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 12:02:31.990333    4831 command_runner.go:130] > StartLimitBurst=3
	I0719 12:02:31.990337    4831 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 12:02:31.990340    4831 command_runner.go:130] > [Service]
	I0719 12:02:31.990343    4831 command_runner.go:130] > Type=notify
	I0719 12:02:31.990347    4831 command_runner.go:130] > Restart=on-failure
	I0719 12:02:31.990352    4831 command_runner.go:130] > Environment=NO_PROXY=192.169.0.16
	I0719 12:02:31.990356    4831 command_runner.go:130] > Environment=NO_PROXY=192.169.0.16,192.169.0.18
	I0719 12:02:31.990364    4831 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 12:02:31.990371    4831 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 12:02:31.990377    4831 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 12:02:31.990383    4831 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 12:02:31.990388    4831 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 12:02:31.990394    4831 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 12:02:31.990403    4831 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 12:02:31.990409    4831 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 12:02:31.990415    4831 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 12:02:31.990418    4831 command_runner.go:130] > ExecStart=
	I0719 12:02:31.990430    4831 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0719 12:02:31.990435    4831 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 12:02:31.990441    4831 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 12:02:31.990446    4831 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 12:02:31.990450    4831 command_runner.go:130] > LimitNOFILE=infinity
	I0719 12:02:31.990453    4831 command_runner.go:130] > LimitNPROC=infinity
	I0719 12:02:31.990456    4831 command_runner.go:130] > LimitCORE=infinity
	I0719 12:02:31.990464    4831 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 12:02:31.990469    4831 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 12:02:31.990488    4831 command_runner.go:130] > TasksMax=infinity
	I0719 12:02:31.990495    4831 command_runner.go:130] > TimeoutStartSec=0
	I0719 12:02:31.990501    4831 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 12:02:31.990505    4831 command_runner.go:130] > Delegate=yes
	I0719 12:02:31.990521    4831 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 12:02:31.990527    4831 command_runner.go:130] > KillMode=process
	I0719 12:02:31.990532    4831 command_runner.go:130] > [Install]
	I0719 12:02:31.990538    4831 command_runner.go:130] > WantedBy=multi-user.target
	I0719 12:02:31.990604    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 12:02:32.002685    4831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 12:02:32.021379    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 12:02:32.031904    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 12:02:32.047913    4831 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 12:02:32.066032    4831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 12:02:32.076476    4831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 12:02:32.091156    4831 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 12:02:32.091445    4831 ssh_runner.go:195] Run: which cri-dockerd
	I0719 12:02:32.094387    4831 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 12:02:32.094573    4831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 12:02:32.101884    4831 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 12:02:32.115549    4831 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 12:02:32.212114    4831 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 12:02:32.324274    4831 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 12:02:32.324305    4831 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 12:02:32.338049    4831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:02:32.429800    4831 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 12:03:33.477808    4831 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0719 12:03:33.477824    4831 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0719 12:03:33.477834    4831 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.048215261s)
	I0719 12:03:33.477889    4831 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0719 12:03:33.487534    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0719 12:03:33.487548    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:29.489039182Z" level=info msg="Starting up"
	I0719 12:03:33.487561    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:29.489485651Z" level=info msg="containerd not running, starting managed containerd"
	I0719 12:03:33.487573    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:29.490106672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=516
	I0719 12:03:33.487582    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.504729944Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 12:03:33.487592    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.519842957Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 12:03:33.487605    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.519924102Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 12:03:33.487614    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.519989972Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 12:03:33.487623    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520025226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 12:03:33.487634    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520192589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 12:03:33.487644    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520242309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 12:03:33.487666    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520383559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 12:03:33.487675    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520429744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 12:03:33.487687    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520463815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 12:03:33.487699    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520494329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 12:03:33.487709    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520622328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 12:03:33.487718    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520824297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 12:03:33.487731    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522368920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 12:03:33.487741    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522413855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 12:03:33.487841    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522541465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 12:03:33.487858    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522582111Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 12:03:33.487869    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522705501Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 12:03:33.487877    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522755283Z" level=info msg="metadata content store policy set" policy=shared
	I0719 12:03:33.487886    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524108114Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 12:03:33.487895    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524211538Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 12:03:33.487904    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524258430Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 12:03:33.487913    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524359849Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 12:03:33.487921    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524403870Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 12:03:33.487932    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524475611Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 12:03:33.487941    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524693533Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 12:03:33.487950    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524857653Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 12:03:33.487961    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524902532Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 12:03:33.487971    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524935305Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 12:03:33.487983    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524974256Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.487994    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525010368Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.488004    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525041413Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.488013    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525072409Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.488023    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525104745Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.488032    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525139114Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.488111    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525170076Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.488125    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525200241Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 12:03:33.488137    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525237119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488146    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525272787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488155    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525304916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488163    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525339108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488172    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525371160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488181    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525406650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488189    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525439163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488198    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525469499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488207    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525502037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488218    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525533873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488227    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525563372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488236    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525592721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488244    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525622341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488253    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525653422Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 12:03:33.488261    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525690287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488270    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525721827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488279    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525751498Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 12:03:33.488288    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525806277Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 12:03:33.488299    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525842248Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 12:03:33.488309    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525874949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 12:03:33.488456    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525905187Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 12:03:33.488467    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525935128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 12:03:33.488478    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526093302Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 12:03:33.488486    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526134238Z" level=info msg="NRI interface is disabled by configuration."
	I0719 12:03:33.488494    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526368235Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 12:03:33.488502    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526492146Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 12:03:33.488510    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526555812Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 12:03:33.488517    4831 command_runner.go:130] > Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526592041Z" level=info msg="containerd successfully booted in 0.022526s"
	I0719 12:03:33.488525    4831 command_runner.go:130] > Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.512068043Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 12:03:33.488533    4831 command_runner.go:130] > Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.521019942Z" level=info msg="Loading containers: start."
	I0719 12:03:33.488551    4831 command_runner.go:130] > Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.616685011Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 12:03:33.488562    4831 command_runner.go:130] > Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.681522031Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0719 12:03:33.488570    4831 command_runner.go:130] > Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.614445200Z" level=info msg="Loading containers: done."
	I0719 12:03:33.488579    4831 command_runner.go:130] > Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.631575085Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 12:03:33.488587    4831 command_runner.go:130] > Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.631860425Z" level=info msg="Daemon has completed initialization"
	I0719 12:03:33.488594    4831 command_runner.go:130] > Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.655164938Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 12:03:33.488602    4831 command_runner.go:130] > Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.655239665Z" level=info msg="API listen on [::]:2376"
	I0719 12:03:33.488607    4831 command_runner.go:130] > Jul 19 19:02:31 multinode-871000-m03 systemd[1]: Started Docker Application Container Engine.
	I0719 12:03:33.488614    4831 command_runner.go:130] > Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.638339545Z" level=info msg="Processing signal 'terminated'"
	I0719 12:03:33.488619    4831 command_runner.go:130] > Jul 19 19:02:32 multinode-871000-m03 systemd[1]: Stopping Docker Application Container Engine...
	I0719 12:03:33.488629    4831 command_runner.go:130] > Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.639494009Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 12:03:33.488640    4831 command_runner.go:130] > Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.639765769Z" level=info msg="Daemon shutdown complete"
	I0719 12:03:33.488648    4831 command_runner.go:130] > Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.639870632Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 12:03:33.488681    4831 command_runner.go:130] > Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.640041119Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 12:03:33.488687    4831 command_runner.go:130] > Jul 19 19:02:33 multinode-871000-m03 systemd[1]: docker.service: Deactivated successfully.
	I0719 12:03:33.488696    4831 command_runner.go:130] > Jul 19 19:02:33 multinode-871000-m03 systemd[1]: Stopped Docker Application Container Engine.
	I0719 12:03:33.488701    4831 command_runner.go:130] > Jul 19 19:02:33 multinode-871000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0719 12:03:33.488709    4831 command_runner.go:130] > Jul 19 19:02:33 multinode-871000-m03 dockerd[846]: time="2024-07-19T19:02:33.684394739Z" level=info msg="Starting up"
	I0719 12:03:33.488719    4831 command_runner.go:130] > Jul 19 19:03:33 multinode-871000-m03 dockerd[846]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0719 12:03:33.488728    4831 command_runner.go:130] > Jul 19 19:03:33 multinode-871000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0719 12:03:33.488734    4831 command_runner.go:130] > Jul 19 19:03:33 multinode-871000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0719 12:03:33.488741    4831 command_runner.go:130] > Jul 19 19:03:33 multinode-871000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	I0719 12:03:33.513298    4831 out.go:177] 
	W0719 12:03:33.535237    4831 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 19:02:29 multinode-871000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 19:02:29 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:29.489039182Z" level=info msg="Starting up"
	Jul 19 19:02:29 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:29.489485651Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 19:02:29 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:29.490106672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=516
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.504729944Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.519842957Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.519924102Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.519989972Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520025226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520192589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520242309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520383559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520429744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520463815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520494329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520622328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.520824297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522368920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522413855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522541465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522582111Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522705501Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.522755283Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524108114Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524211538Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524258430Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524359849Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524403870Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524475611Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524693533Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524857653Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524902532Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524935305Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.524974256Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525010368Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525041413Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525072409Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525104745Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525139114Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525170076Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525200241Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525237119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525272787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525304916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525339108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525371160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525406650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525439163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525469499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525502037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525533873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525563372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525592721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525622341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525653422Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525690287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525721827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525751498Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525806277Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525842248Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525874949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525905187Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.525935128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526093302Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526134238Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526368235Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526492146Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526555812Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 19:02:29 multinode-871000-m03 dockerd[516]: time="2024-07-19T19:02:29.526592041Z" level=info msg="containerd successfully booted in 0.022526s"
	Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.512068043Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.521019942Z" level=info msg="Loading containers: start."
	Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.616685011Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 19:02:30 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:30.681522031Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.614445200Z" level=info msg="Loading containers: done."
	Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.631575085Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.631860425Z" level=info msg="Daemon has completed initialization"
	Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.655164938Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 19:02:31 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:31.655239665Z" level=info msg="API listen on [::]:2376"
	Jul 19 19:02:31 multinode-871000-m03 systemd[1]: Started Docker Application Container Engine.
	Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.638339545Z" level=info msg="Processing signal 'terminated'"
	Jul 19 19:02:32 multinode-871000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.639494009Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.639765769Z" level=info msg="Daemon shutdown complete"
	Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.639870632Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 19:02:32 multinode-871000-m03 dockerd[509]: time="2024-07-19T19:02:32.640041119Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 19:02:33 multinode-871000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 19:02:33 multinode-871000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 19:02:33 multinode-871000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 19:02:33 multinode-871000-m03 dockerd[846]: time="2024-07-19T19:02:33.684394739Z" level=info msg="Starting up"
	Jul 19 19:03:33 multinode-871000-m03 dockerd[846]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 19:03:33 multinode-871000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 19:03:33 multinode-871000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 19:03:33 multinode-871000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0719 12:03:33.535347    4831 out.go:239] * 
	W0719 12:03:33.536533    4831 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:03:33.599187    4831 out.go:177] 
	
	
	==> Docker <==
	Jul 19 19:01:12 multinode-871000 dockerd[866]: time="2024-07-19T19:01:12.282867145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 19:01:12 multinode-871000 dockerd[866]: time="2024-07-19T19:01:12.283151895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 19:01:12 multinode-871000 dockerd[866]: time="2024-07-19T19:01:12.283366771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 19:01:12 multinode-871000 dockerd[866]: time="2024-07-19T19:01:12.390150169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 19:01:12 multinode-871000 dockerd[866]: time="2024-07-19T19:01:12.390236615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 19:01:12 multinode-871000 dockerd[866]: time="2024-07-19T19:01:12.390250357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 19:01:12 multinode-871000 dockerd[866]: time="2024-07-19T19:01:12.390555288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 19:01:12 multinode-871000 cri-dockerd[1113]: time="2024-07-19T19:01:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ebd39f41e6638f64b363d2934086fdc2ede8b862882c0e08cf5d2c5295eb7a8/resolv.conf as [nameserver 192.169.0.1]"
	Jul 19 19:01:12 multinode-871000 cri-dockerd[1113]: time="2024-07-19T19:01:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/76e4441c9547575012f12984a478c391721c82750cf3de8f62cc3cfc0c4c4556/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 19:01:12 multinode-871000 dockerd[866]: time="2024-07-19T19:01:12.569504100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 19:01:12 multinode-871000 dockerd[866]: time="2024-07-19T19:01:12.569595330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 19:01:12 multinode-871000 dockerd[866]: time="2024-07-19T19:01:12.569657165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 19:01:12 multinode-871000 dockerd[866]: time="2024-07-19T19:01:12.569868067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 19:01:12 multinode-871000 dockerd[866]: time="2024-07-19T19:01:12.617913728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 19:01:12 multinode-871000 dockerd[866]: time="2024-07-19T19:01:12.617957700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 19:01:12 multinode-871000 dockerd[866]: time="2024-07-19T19:01:12.618003623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 19:01:12 multinode-871000 dockerd[866]: time="2024-07-19T19:01:12.618076618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 19:01:27 multinode-871000 dockerd[860]: time="2024-07-19T19:01:27.039582597Z" level=info msg="ignoring event" container=2c75948ee72dd9604b9f72f9045d73c3b8ae7526147229be4b9ff18692570469 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 19:01:27 multinode-871000 dockerd[866]: time="2024-07-19T19:01:27.040176373Z" level=info msg="shim disconnected" id=2c75948ee72dd9604b9f72f9045d73c3b8ae7526147229be4b9ff18692570469 namespace=moby
	Jul 19 19:01:27 multinode-871000 dockerd[866]: time="2024-07-19T19:01:27.040399926Z" level=warning msg="cleaning up after shim disconnected" id=2c75948ee72dd9604b9f72f9045d73c3b8ae7526147229be4b9ff18692570469 namespace=moby
	Jul 19 19:01:27 multinode-871000 dockerd[866]: time="2024-07-19T19:01:27.040442166Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 19:01:42 multinode-871000 dockerd[866]: time="2024-07-19T19:01:42.471042114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 19:01:42 multinode-871000 dockerd[866]: time="2024-07-19T19:01:42.471106222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 19:01:42 multinode-871000 dockerd[866]: time="2024-07-19T19:01:42.471119131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 19:01:42 multinode-871000 dockerd[866]: time="2024-07-19T19:01:42.471460435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	54030381b2b22       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   afe529bda89ad       storage-provisioner
	9f499474ed5de       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   76e4441c95475       busybox-fc5497c4f-4vlzm
	b82c0a7be5261       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   1                   5ebd39f41e663       coredns-7db6d8ff4d-85r26
	8b0c9d8235c35       6f1d07c71fa0f                                                                                         2 minutes ago        Running             kindnet-cni               1                   16a6a23a49faf       kindnet-hht5h
	2c75948ee72dd       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   afe529bda89ad       storage-provisioner
	5d2d94a02ef7b       55bb025d2cfa5                                                                                         2 minutes ago        Running             kube-proxy                1                   c840d59ef28c7       kube-proxy-89hm2
	d1e31a8cb0057       3edc18e7b7672                                                                                         2 minutes ago        Running             kube-scheduler            1                   2dcf7d2254473       kube-scheduler-multinode-871000
	487a7988900e1       76932a3b37d7e                                                                                         2 minutes ago        Running             kube-controller-manager   1                   4ad08359d1c75       kube-controller-manager-multinode-871000
	38214d713c3f0       3861cfcd7c04c                                                                                         2 minutes ago        Running             etcd                      1                   6b34bbe29e963       etcd-multinode-871000
	db6b52e15dbb3       1f6d574d502f3                                                                                         2 minutes ago        Running             kube-apiserver            1                   59687516d13da       kube-apiserver-multinode-871000
	5a914d8aa459f       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   6 minutes ago        Exited              busybox                   0                   c10522c239f86       busybox-fc5497c4f-4vlzm
	6ddb80b3c9e90       cbb01a7bd410d                                                                                         7 minutes ago        Exited              coredns                   0                   c0dd65646579f       coredns-7db6d8ff4d-85r26
	9fb6361ebde60       kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a              8 minutes ago        Exited              kindnet-cni               0                   587cdaf6e20cf       kindnet-hht5h
	a2327b8c83c03       55bb025d2cfa5                                                                                         8 minutes ago        Exited              kube-proxy                0                   492c042de032e       kube-proxy-89hm2
	a094a5e71d559       1f6d574d502f3                                                                                         8 minutes ago        Exited              kube-apiserver            0                   48bd43fcf8d2e       kube-apiserver-multinode-871000
	a69e88441e03d       3edc18e7b7672                                                                                         8 minutes ago        Exited              kube-scheduler            0                   ce0d6620b5f9c       kube-scheduler-multinode-871000
	e5a9045d55789       76932a3b37d7e                                                                                         8 minutes ago        Exited              kube-controller-manager   0                   2fb0e3bd31459       kube-controller-manager-multinode-871000
	72d515f79956b       3861cfcd7c04c                                                                                         8 minutes ago        Exited              etcd                      0                   ae60ee8266a73       etcd-multinode-871000
	
	
	==> coredns [6ddb80b3c9e9] <==
	[INFO] 10.244.1.2:41820 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000042835s
	[INFO] 10.244.1.2:45279 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063565s
	[INFO] 10.244.1.2:55905 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049151s
	[INFO] 10.244.1.2:43623 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000055295s
	[INFO] 10.244.1.2:35200 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074316s
	[INFO] 10.244.1.2:57278 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064189s
	[INFO] 10.244.1.2:58459 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096781s
	[INFO] 10.244.0.3:39210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093815s
	[INFO] 10.244.0.3:49967 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00003235s
	[INFO] 10.244.0.3:33878 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000026602s
	[INFO] 10.244.0.3:47777 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031689s
	[INFO] 10.244.1.2:60208 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093418s
	[INFO] 10.244.1.2:33523 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069184s
	[INFO] 10.244.1.2:41540 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063874s
	[INFO] 10.244.1.2:46201 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040878s
	[INFO] 10.244.0.3:36085 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076742s
	[INFO] 10.244.0.3:41624 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000057247s
	[INFO] 10.244.0.3:57996 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000038206s
	[INFO] 10.244.0.3:45988 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000043217s
	[INFO] 10.244.1.2:60294 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156573s
	[INFO] 10.244.1.2:47315 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000050798s
	[INFO] 10.244.1.2:34156 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000040569s
	[INFO] 10.244.1.2:51809 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000035812s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b82c0a7be526] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60513 - 52757 "HINFO IN 3107668466391866589.731172759676700578. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.010037944s
	
	
	==> describe nodes <==
	Name:               multinode-871000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-871000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=multinode-871000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T11_55_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:55:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-871000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:03:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 19:01:10 +0000   Fri, 19 Jul 2024 18:55:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 19:01:10 +0000   Fri, 19 Jul 2024 18:55:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 19:01:10 +0000   Fri, 19 Jul 2024 18:55:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 19:01:10 +0000   Fri, 19 Jul 2024 19:01:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.16
	  Hostname:    multinode-871000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 320ed646183647edaa9be1493a968de0
	  System UUID:                50734d54-0000-0000-9eb1-76002314766d
	  Boot ID:                    d84df304-67ff-4de6-a4c0-13173e72ed6c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4vlzm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 coredns-7db6d8ff4d-85r26                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m15s
	  kube-system                 etcd-multinode-871000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m30s
	  kube-system                 kindnet-hht5h                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m15s
	  kube-system                 kube-apiserver-multinode-871000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-controller-manager-multinode-871000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-proxy-89hm2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-scheduler-multinode-871000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m14s                  kube-proxy       
	  Normal  Starting                 2m38s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m35s (x8 over 8m35s)  kubelet          Node multinode-871000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m35s (x8 over 8m35s)  kubelet          Node multinode-871000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m35s (x7 over 8m35s)  kubelet          Node multinode-871000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m30s                  kubelet          Node multinode-871000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m30s                  kubelet          Node multinode-871000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s                  kubelet          Node multinode-871000 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m30s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m16s                  node-controller  Node multinode-871000 event: Registered Node multinode-871000 in Controller
	  Normal  NodeReady                8m                     kubelet          Node multinode-871000 status is now: NodeReady
	  Normal  Starting                 2m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m43s (x8 over 2m43s)  kubelet          Node multinode-871000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m43s (x8 over 2m43s)  kubelet          Node multinode-871000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m43s (x7 over 2m43s)  kubelet          Node multinode-871000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m27s                  node-controller  Node multinode-871000 event: Registered Node multinode-871000 in Controller
	
	
	Name:               multinode-871000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-871000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=multinode-871000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T12_01_37_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 19:01:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-871000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:03:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 19:01:52 +0000   Fri, 19 Jul 2024 19:01:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 19:01:52 +0000   Fri, 19 Jul 2024 19:01:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 19:01:52 +0000   Fri, 19 Jul 2024 19:01:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 19:01:52 +0000   Fri, 19 Jul 2024 19:01:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.18
	  Hostname:    multinode-871000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 7eb6d624fb0d46dbab820a3020871299
	  System UUID:                01564ae8-0000-0000-8601-a045f8c107f0
	  Boot ID:                    fd67b6ba-c15e-4981-b29d-32cda903f6ee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-897rz       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m21s
	  kube-system                 kube-proxy-t9bqq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m14s                  kube-proxy  
	  Normal  Starting                 115s                   kube-proxy  
	  Normal  NodeHasSufficientMemory  7m22s (x2 over 7m22s)  kubelet     Node multinode-871000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m22s (x2 over 7m22s)  kubelet     Node multinode-871000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m22s (x2 over 7m22s)  kubelet     Node multinode-871000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m21s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m59s                  kubelet     Node multinode-871000-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  119s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  118s (x2 over 119s)    kubelet     Node multinode-871000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x2 over 119s)    kubelet     Node multinode-871000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x2 over 119s)    kubelet     Node multinode-871000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                103s                   kubelet     Node multinode-871000-m02 status is now: NodeReady
	
	
	Name:               multinode-871000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-871000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=multinode-871000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T11_59_53_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:59:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-871000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:00:13 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 19:00:11 +0000   Fri, 19 Jul 2024 19:01:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 19:00:11 +0000   Fri, 19 Jul 2024 19:01:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 19:00:11 +0000   Fri, 19 Jul 2024 19:01:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 19:00:11 +0000   Fri, 19 Jul 2024 19:01:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.19
	  Hostname:    multinode-871000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 7967e114c4314594a6a7497dbb141ef8
	  System UUID:                f71242ae-0000-0000-acb1-8416661a4529
	  Boot ID:                    4326fc35-1ba6-43af-ae1e-9e69a491a21f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hlwxl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kindnet-4stbd              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m32s
	  kube-system                 kube-proxy-86ssb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m25s                  kube-proxy       
	  Normal  Starting                 3m39s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    6m33s (x2 over 6m33s)  kubelet          Node multinode-871000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m33s (x2 over 6m33s)  kubelet          Node multinode-871000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m33s (x2 over 6m33s)  kubelet          Node multinode-871000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                6m10s                  kubelet          Node multinode-871000-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m42s (x2 over 3m42s)  kubelet          Node multinode-871000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m42s (x2 over 3m42s)  kubelet          Node multinode-871000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m42s (x2 over 3m42s)  kubelet          Node multinode-871000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m24s                  kubelet          Node multinode-871000-m03 status is now: NodeReady
	  Normal  RegisteredNode           2m27s                  node-controller  Node multinode-871000-m03 event: Registered Node multinode-871000-m03 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node multinode-871000-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +5.356106] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006953] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.498396] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.209230] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.249916] systemd-fstab-generator[477]: Ignoring "noauto" option for root device
	[  +0.100946] systemd-fstab-generator[489]: Ignoring "noauto" option for root device
	[  +1.826151] systemd-fstab-generator[790]: Ignoring "noauto" option for root device
	[  +0.054559] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.201499] systemd-fstab-generator[826]: Ignoring "noauto" option for root device
	[  +0.105097] systemd-fstab-generator[838]: Ignoring "noauto" option for root device
	[  +0.115804] systemd-fstab-generator[852]: Ignoring "noauto" option for root device
	[  +2.453258] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +0.110851] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	[  +0.094932] systemd-fstab-generator[1090]: Ignoring "noauto" option for root device
	[  +0.125497] systemd-fstab-generator[1105]: Ignoring "noauto" option for root device
	[  +0.404264] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[  +1.621259] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.058808] kauditd_printk_skb: 224 callbacks suppressed
	[  +5.022057] kauditd_printk_skb: 87 callbacks suppressed
	[  +2.522596] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[Jul19 19:01] kauditd_printk_skb: 45 callbacks suppressed
	[ +18.565030] kauditd_printk_skb: 16 callbacks suppressed
	
	
	==> etcd [38214d713c3f] <==
	{"level":"info","ts":"2024-07-19T19:00:53.473733Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T19:00:53.473805Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T19:00:53.471954Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T19:00:53.47197Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.16:2380"}
	{"level":"info","ts":"2024-07-19T19:00:53.472231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa switched to configuration voters=(1317664063532327594)"}
	{"level":"info","ts":"2024-07-19T19:00:53.474201Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1e23f9358b15cc2f","local-member-id":"1249487c082462aa","added-peer-id":"1249487c082462aa","added-peer-peer-urls":["https://192.169.0.16:2380"]}
	{"level":"info","ts":"2024-07-19T19:00:53.474567Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1249487c082462aa","initial-advertise-peer-urls":["https://192.169.0.16:2380"],"listen-peer-urls":["https://192.169.0.16:2380"],"advertise-client-urls":["https://192.169.0.16:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.16:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T19:00:53.474618Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T19:00:53.477338Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e23f9358b15cc2f","local-member-id":"1249487c082462aa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:00:53.478768Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:00:53.480679Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.16:2380"}
	{"level":"info","ts":"2024-07-19T19:00:54.525767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T19:00:54.525809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T19:00:54.525831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa received MsgPreVoteResp from 1249487c082462aa at term 2"}
	{"level":"info","ts":"2024-07-19T19:00:54.525842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T19:00:54.525908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa received MsgVoteResp from 1249487c082462aa at term 3"}
	{"level":"info","ts":"2024-07-19T19:00:54.525918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became leader at term 3"}
	{"level":"info","ts":"2024-07-19T19:00:54.525924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1249487c082462aa elected leader 1249487c082462aa at term 3"}
	{"level":"info","ts":"2024-07-19T19:00:54.52664Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"1249487c082462aa","local-member-attributes":"{Name:multinode-871000 ClientURLs:[https://192.169.0.16:2379]}","request-path":"/0/members/1249487c082462aa/attributes","cluster-id":"1e23f9358b15cc2f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T19:00:54.526784Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:00:54.526917Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:00:54.528404Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.16:2379"}
	{"level":"info","ts":"2024-07-19T19:00:54.528804Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T19:00:54.528834Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T19:00:54.529794Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [72d515f79956] <==
	{"level":"info","ts":"2024-07-19T18:55:01.987945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became candidate at term 2"}
	{"level":"info","ts":"2024-07-19T18:55:01.988016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa received MsgVoteResp from 1249487c082462aa at term 2"}
	{"level":"info","ts":"2024-07-19T18:55:01.988138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1249487c082462aa became leader at term 2"}
	{"level":"info","ts":"2024-07-19T18:55:01.988214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1249487c082462aa elected leader 1249487c082462aa at term 2"}
	{"level":"info","ts":"2024-07-19T18:55:01.993772Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T18:55:01.995882Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T18:55:01.99395Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"1249487c082462aa","local-member-attributes":"{Name:multinode-871000 ClientURLs:[https://192.169.0.16:2379]}","request-path":"/0/members/1249487c082462aa/attributes","cluster-id":"1e23f9358b15cc2f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T18:55:01.996074Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e23f9358b15cc2f","local-member-id":"1249487c082462aa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T18:55:01.996319Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T18:55:01.996182Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T18:55:01.998762Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T18:55:02.001968Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.16:2379"}
	{"level":"info","ts":"2024-07-19T18:55:02.009952Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T18:55:02.010141Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T18:55:02.014688Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T19:00:24.58046Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-19T19:00:24.580499Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-871000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.16:2380"],"advertise-client-urls":["https://192.169.0.16:2379"]}
	{"level":"warn","ts":"2024-07-19T19:00:24.580551Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T19:00:24.580611Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T19:00:24.605983Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.16:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T19:00:24.606024Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.16:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T19:00:24.606079Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1249487c082462aa","current-leader-member-id":"1249487c082462aa"}
	{"level":"info","ts":"2024-07-19T19:00:24.607151Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.16:2380"}
	{"level":"info","ts":"2024-07-19T19:00:24.607266Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.16:2380"}
	{"level":"info","ts":"2024-07-19T19:00:24.607277Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-871000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.16:2380"],"advertise-client-urls":["https://192.169.0.16:2379"]}
	
	
	==> kernel <==
	 19:03:35 up 3 min,  0 users,  load average: 0.17, 0.15, 0.06
	Linux multinode-871000 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8b0c9d8235c3] <==
	I0719 19:02:48.254437       1 main.go:322] Node multinode-871000-m03 has CIDR [10.244.3.0/24] 
	I0719 19:02:58.253976       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0719 19:02:58.254088       1 main.go:299] handling current node
	I0719 19:02:58.254107       1 main.go:295] Handling node with IPs: map[192.169.0.18:{}]
	I0719 19:02:58.254115       1 main.go:322] Node multinode-871000-m02 has CIDR [10.244.1.0/24] 
	I0719 19:02:58.254370       1 main.go:295] Handling node with IPs: map[192.169.0.19:{}]
	I0719 19:02:58.254425       1 main.go:322] Node multinode-871000-m03 has CIDR [10.244.3.0/24] 
	I0719 19:03:08.261727       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0719 19:03:08.261891       1 main.go:299] handling current node
	I0719 19:03:08.261951       1 main.go:295] Handling node with IPs: map[192.169.0.18:{}]
	I0719 19:03:08.261997       1 main.go:322] Node multinode-871000-m02 has CIDR [10.244.1.0/24] 
	I0719 19:03:08.262083       1 main.go:295] Handling node with IPs: map[192.169.0.19:{}]
	I0719 19:03:08.262138       1 main.go:322] Node multinode-871000-m03 has CIDR [10.244.3.0/24] 
	I0719 19:03:18.254526       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0719 19:03:18.254597       1 main.go:299] handling current node
	I0719 19:03:18.254615       1 main.go:295] Handling node with IPs: map[192.169.0.18:{}]
	I0719 19:03:18.254625       1 main.go:322] Node multinode-871000-m02 has CIDR [10.244.1.0/24] 
	I0719 19:03:18.255233       1 main.go:295] Handling node with IPs: map[192.169.0.19:{}]
	I0719 19:03:18.255288       1 main.go:322] Node multinode-871000-m03 has CIDR [10.244.3.0/24] 
	I0719 19:03:28.253251       1 main.go:295] Handling node with IPs: map[192.169.0.18:{}]
	I0719 19:03:28.253295       1 main.go:322] Node multinode-871000-m02 has CIDR [10.244.1.0/24] 
	I0719 19:03:28.253443       1 main.go:295] Handling node with IPs: map[192.169.0.19:{}]
	I0719 19:03:28.253511       1 main.go:322] Node multinode-871000-m03 has CIDR [10.244.3.0/24] 
	I0719 19:03:28.253595       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0719 19:03:28.253623       1 main.go:299] handling current node
	
	
	==> kindnet [9fb6361ebde6] <==
	I0719 18:59:45.488841       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0719 18:59:45.489018       1 main.go:299] handling current node
	I0719 18:59:45.489059       1 main.go:295] Handling node with IPs: map[192.169.0.18:{}]
	I0719 18:59:45.489260       1 main.go:322] Node multinode-871000-m02 has CIDR [10.244.1.0/24] 
	I0719 18:59:45.489606       1 main.go:295] Handling node with IPs: map[192.169.0.19:{}]
	I0719 18:59:45.489715       1 main.go:322] Node multinode-871000-m03 has CIDR [10.244.2.0/24] 
	I0719 18:59:55.484006       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0719 18:59:55.484134       1 main.go:299] handling current node
	I0719 18:59:55.484173       1 main.go:295] Handling node with IPs: map[192.169.0.18:{}]
	I0719 18:59:55.484200       1 main.go:322] Node multinode-871000-m02 has CIDR [10.244.1.0/24] 
	I0719 18:59:55.484334       1 main.go:295] Handling node with IPs: map[192.169.0.19:{}]
	I0719 18:59:55.484412       1 main.go:322] Node multinode-871000-m03 has CIDR [10.244.3.0/24] 
	I0719 18:59:55.484618       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.169.0.19 Flags: [] Table: 0} 
	I0719 19:00:05.484422       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0719 19:00:05.484733       1 main.go:299] handling current node
	I0719 19:00:05.484900       1 main.go:295] Handling node with IPs: map[192.169.0.18:{}]
	I0719 19:00:05.485041       1 main.go:322] Node multinode-871000-m02 has CIDR [10.244.1.0/24] 
	I0719 19:00:05.485614       1 main.go:295] Handling node with IPs: map[192.169.0.19:{}]
	I0719 19:00:05.485806       1 main.go:322] Node multinode-871000-m03 has CIDR [10.244.3.0/24] 
	I0719 19:00:15.484143       1 main.go:295] Handling node with IPs: map[192.169.0.18:{}]
	I0719 19:00:15.484236       1 main.go:322] Node multinode-871000-m02 has CIDR [10.244.1.0/24] 
	I0719 19:00:15.484406       1 main.go:295] Handling node with IPs: map[192.169.0.19:{}]
	I0719 19:00:15.484518       1 main.go:322] Node multinode-871000-m03 has CIDR [10.244.3.0/24] 
	I0719 19:00:15.484642       1 main.go:295] Handling node with IPs: map[192.169.0.16:{}]
	I0719 19:00:15.484706       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a094a5e71d55] <==
	W0719 19:00:25.589579       1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.589614       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.589730       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.589966       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.590103       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.590246       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.590412       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.590493       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.590672       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.590841       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.590892       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.590916       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.590991       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.591117       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.591170       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.591263       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.591876       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.591986       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.591999       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.592104       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.592223       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.592367       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.592546       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.592604       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 19:00:25.593803       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [db6b52e15dbb] <==
	I0719 19:00:55.477422       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 19:00:55.477569       1 policy_source.go:224] refreshing policies
	I0719 19:00:55.477981       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 19:00:55.481903       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 19:00:55.495276       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 19:00:55.495394       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 19:00:55.497512       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 19:00:55.498149       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 19:00:55.498244       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0719 19:00:55.500004       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 19:00:55.500409       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 19:00:55.500962       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0719 19:00:55.501675       1 aggregator.go:165] initial CRD sync complete...
	I0719 19:00:55.501785       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 19:00:55.501811       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 19:00:55.501816       1 cache.go:39] Caches are synced for autoregister controller
	I0719 19:00:56.417045       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0719 19:00:56.608356       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.16]
	I0719 19:00:56.609412       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 19:00:56.612230       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 19:00:57.641657       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 19:00:57.726476       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 19:00:57.735206       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 19:00:57.776206       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 19:00:57.781406       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [487a7988900e] <==
	I0719 19:01:08.521447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.342991ms"
	I0719 19:01:08.521938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.430633ms"
	I0719 19:01:08.522275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.109µs"
	I0719 19:01:08.522501       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.157µs"
	I0719 19:01:08.839971       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 19:01:08.841258       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 19:01:08.841308       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 19:01:10.366730       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-871000-m02"
	I0719 19:01:12.764948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.716709ms"
	I0719 19:01:12.765017       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.861µs"
	I0719 19:01:12.783652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.687µs"
	I0719 19:01:12.802883       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="8.273591ms"
	I0719 19:01:12.803338       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="33.067µs"
	I0719 19:01:32.837129       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.809332ms"
	I0719 19:01:32.841636       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.465152ms"
	I0719 19:01:32.842027       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.778µs"
	I0719 19:01:36.142852       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-871000-m03"
	I0719 19:01:37.034652       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-871000-m02\" does not exist"
	I0719 19:01:37.036138       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-871000-m03"
	I0719 19:01:37.039204       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-871000-m02" podCIDRs=["10.244.1.0/24"]
	I0719 19:01:38.945606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.956µs"
	I0719 19:01:52.110099       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-871000-m02"
	I0719 19:02:02.982288       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.425µs"
	I0719 19:02:03.120895       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.578µs"
	I0719 19:02:03.123463       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.023µs"
	
	
	==> kube-controller-manager [e5a9045d5578] <==
	I0719 18:55:36.436540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.675µs"
	I0719 18:55:39.845389       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0719 18:56:14.432684       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-871000-m02\" does not exist"
	I0719 18:56:14.446804       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-871000-m02" podCIDRs=["10.244.1.0/24"]
	I0719 18:56:14.850620       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-871000-m02"
	I0719 18:56:37.364174       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-871000-m02"
	I0719 18:56:39.385639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.208818ms"
	I0719 18:56:39.395340       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.600908ms"
	I0719 18:56:39.405470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.694116ms"
	I0719 18:56:39.405540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.018µs"
	I0719 18:56:41.526553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.038038ms"
	I0719 18:56:41.526610       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.097µs"
	I0719 18:56:41.859416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.487319ms"
	I0719 18:56:41.859450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.728µs"
	I0719 18:57:03.407956       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-871000-m02"
	I0719 18:57:03.408246       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-871000-m03\" does not exist"
	I0719 18:57:03.421054       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-871000-m03" podCIDRs=["10.244.2.0/24"]
	I0719 18:57:04.868120       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-871000-m03"
	I0719 18:57:26.591944       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-871000-m02"
	I0719 18:58:14.892242       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-871000-m02"
	I0719 18:59:52.063091       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-871000-m02"
	I0719 18:59:53.182287       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-871000-m03\" does not exist"
	I0719 18:59:53.182360       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-871000-m02"
	I0719 18:59:53.192189       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-871000-m03" podCIDRs=["10.244.3.0/24"]
	I0719 19:00:11.172838       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-871000-m02"
	
	
	==> kube-proxy [5d2d94a02ef7] <==
	I0719 19:00:57.164355       1 server_linux.go:69] "Using iptables proxy"
	I0719 19:00:57.177001       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.16"]
	I0719 19:00:57.231327       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 19:00:57.231357       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 19:00:57.231372       1 server_linux.go:165] "Using iptables Proxier"
	I0719 19:00:57.233600       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 19:00:57.233917       1 server.go:872] "Version info" version="v1.30.3"
	I0719 19:00:57.233929       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:00:57.236066       1 config.go:192] "Starting service config controller"
	I0719 19:00:57.236348       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 19:00:57.236487       1 config.go:101] "Starting endpoint slice config controller"
	I0719 19:00:57.236572       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 19:00:57.237614       1 config.go:319] "Starting node config controller"
	I0719 19:00:57.239175       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 19:00:57.337482       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 19:00:57.337533       1 shared_informer.go:320] Caches are synced for service config
	I0719 19:00:57.339414       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [a2327b8c83c0] <==
	I0719 18:55:21.239166       1 server_linux.go:69] "Using iptables proxy"
	I0719 18:55:21.259836       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.16"]
	I0719 18:55:21.338834       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 18:55:21.338857       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 18:55:21.338868       1 server_linux.go:165] "Using iptables Proxier"
	I0719 18:55:21.341166       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 18:55:21.344541       1 server.go:872] "Version info" version="v1.30.3"
	I0719 18:55:21.344553       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:55:21.346671       1 config.go:192] "Starting service config controller"
	I0719 18:55:21.346681       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 18:55:21.346709       1 config.go:101] "Starting endpoint slice config controller"
	I0719 18:55:21.346714       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 18:55:21.346958       1 config.go:319] "Starting node config controller"
	I0719 18:55:21.346962       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 18:55:21.447909       1 shared_informer.go:320] Caches are synced for node config
	I0719 18:55:21.447929       1 shared_informer.go:320] Caches are synced for service config
	I0719 18:55:21.447947       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a69e88441e03] <==
	E0719 18:55:03.475883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 18:55:03.476949       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 18:55:03.477031       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 18:55:03.477161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 18:55:03.477276       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 18:55:03.477395       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 18:55:03.477511       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 18:55:03.477641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 18:55:03.477735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 18:55:03.477767       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 18:55:03.477816       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 18:55:03.477870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 18:55:03.478003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 18:55:04.301778       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 18:55:04.301877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 18:55:04.308833       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 18:55:04.308971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 18:55:04.334337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 18:55:04.334751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 18:55:04.520567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 18:55:04.520605       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 18:55:04.705768       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 18:55:04.705810       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 18:55:06.457005       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 19:00:24.570339       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d1e31a8cb005] <==
	I0719 19:00:53.626264       1 serving.go:380] Generated self-signed cert in-memory
	W0719 19:00:55.446286       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 19:00:55.446324       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 19:00:55.446334       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 19:00:55.446340       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 19:00:55.478526       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 19:00:55.478651       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:00:55.482947       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 19:00:55.483070       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 19:00:55.483057       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 19:00:55.485023       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 19:00:55.585889       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 19:01:03 multinode-871000 kubelet[1370]: E0719 19:01:03.942759    1370 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c7d62ec5-693b-46ab-9437-86aef8b469e8-config-volume podName:c7d62ec5-693b-46ab-9437-86aef8b469e8 nodeName:}" failed. No retries permitted until 2024-07-19 19:01:11.942740444 +0000 UTC m=+19.682392960 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c7d62ec5-693b-46ab-9437-86aef8b469e8-config-volume") pod "coredns-7db6d8ff4d-85r26" (UID: "c7d62ec5-693b-46ab-9437-86aef8b469e8") : object "kube-system"/"coredns" not registered
	Jul 19 19:01:04 multinode-871000 kubelet[1370]: E0719 19:01:04.043761    1370 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jul 19 19:01:04 multinode-871000 kubelet[1370]: E0719 19:01:04.043887    1370 projected.go:200] Error preparing data for projected volume kube-api-access-s4hxj for pod default/busybox-fc5497c4f-4vlzm: object "default"/"kube-root-ca.crt" not registered
	Jul 19 19:01:04 multinode-871000 kubelet[1370]: E0719 19:01:04.043949    1370 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f263090-f3e1-4fa6-b77e-c708e3e5f49d-kube-api-access-s4hxj podName:9f263090-f3e1-4fa6-b77e-c708e3e5f49d nodeName:}" failed. No retries permitted until 2024-07-19 19:01:12.043932224 +0000 UTC m=+19.783584738 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s4hxj" (UniqueName: "kubernetes.io/projected/9f263090-f3e1-4fa6-b77e-c708e3e5f49d-kube-api-access-s4hxj") pod "busybox-fc5497c4f-4vlzm" (UID: "9f263090-f3e1-4fa6-b77e-c708e3e5f49d") : object "default"/"kube-root-ca.crt" not registered
	Jul 19 19:01:04 multinode-871000 kubelet[1370]: E0719 19:01:04.422354    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-85r26" podUID="c7d62ec5-693b-46ab-9437-86aef8b469e8"
	Jul 19 19:01:05 multinode-871000 kubelet[1370]: E0719 19:01:05.422085    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-4vlzm" podUID="9f263090-f3e1-4fa6-b77e-c708e3e5f49d"
	Jul 19 19:01:06 multinode-871000 kubelet[1370]: E0719 19:01:06.422516    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-85r26" podUID="c7d62ec5-693b-46ab-9437-86aef8b469e8"
	Jul 19 19:01:07 multinode-871000 kubelet[1370]: E0719 19:01:07.421875    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-4vlzm" podUID="9f263090-f3e1-4fa6-b77e-c708e3e5f49d"
	Jul 19 19:01:08 multinode-871000 kubelet[1370]: E0719 19:01:08.422721    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-85r26" podUID="c7d62ec5-693b-46ab-9437-86aef8b469e8"
	Jul 19 19:01:09 multinode-871000 kubelet[1370]: E0719 19:01:09.421926    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-4vlzm" podUID="9f263090-f3e1-4fa6-b77e-c708e3e5f49d"
	Jul 19 19:01:10 multinode-871000 kubelet[1370]: I0719 19:01:10.358659    1370 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Jul 19 19:01:27 multinode-871000 kubelet[1370]: I0719 19:01:27.889385    1370 scope.go:117] "RemoveContainer" containerID="5a07c503ef107083bee3b07811f777c61f991eb38b50d287dfaeca6c1d83b4a3"
	Jul 19 19:01:27 multinode-871000 kubelet[1370]: I0719 19:01:27.889652    1370 scope.go:117] "RemoveContainer" containerID="2c75948ee72dd9604b9f72f9045d73c3b8ae7526147229be4b9ff18692570469"
	Jul 19 19:01:27 multinode-871000 kubelet[1370]: E0719 19:01:27.889755    1370 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ccd0aaec-abf0-4aec-9ebf-14f619510aeb)\"" pod="kube-system/storage-provisioner" podUID="ccd0aaec-abf0-4aec-9ebf-14f619510aeb"
	Jul 19 19:01:42 multinode-871000 kubelet[1370]: I0719 19:01:42.422872    1370 scope.go:117] "RemoveContainer" containerID="2c75948ee72dd9604b9f72f9045d73c3b8ae7526147229be4b9ff18692570469"
	Jul 19 19:01:52 multinode-871000 kubelet[1370]: E0719 19:01:52.439783    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:01:52 multinode-871000 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:01:52 multinode-871000 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:01:52 multinode-871000 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:01:52 multinode-871000 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 19:02:52 multinode-871000 kubelet[1370]: E0719 19:02:52.443691    1370 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 19:02:52 multinode-871000 kubelet[1370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 19:02:52 multinode-871000 kubelet[1370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 19:02:52 multinode-871000 kubelet[1370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 19:02:52 multinode-871000 kubelet[1370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-871000 -n multinode-871000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-871000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-hlwxl
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-871000 describe pod busybox-fc5497c4f-hlwxl
helpers_test.go:282: (dbg) kubectl --context multinode-871000 describe pod busybox-fc5497c4f-hlwxl:

-- stdout --
	Name:             busybox-fc5497c4f-hlwxl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             multinode-871000-m03/
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vmgnz (ro)
	Conditions:
	  Type           Status
	  PodScheduled   True 
	Volumes:
	  kube-api-access-vmgnz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2m4s  default-scheduler  Successfully assigned default/busybox-fc5497c4f-hlwxl to multinode-871000-m03

-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (204.09s)


Test pass (320/344)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 12
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.30.3/json-events 6.51
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.29
18 TestDownloadOnly/v1.30.3/DeleteAll 0.23
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.31.0-beta.0/json-events 6.52
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.29
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.25
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.21
30 TestBinaryMirror 0.92
31 TestOffline 97.25
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.17
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
36 TestAddons/Setup 212.26
38 TestAddons/parallel/Registry 14.24
39 TestAddons/parallel/Ingress 19.14
40 TestAddons/parallel/InspektorGadget 10.52
41 TestAddons/parallel/MetricsServer 5.49
42 TestAddons/parallel/HelmTiller 9.91
44 TestAddons/parallel/CSI 50.48
45 TestAddons/parallel/Headlamp 13.02
46 TestAddons/parallel/CloudSpanner 5.38
47 TestAddons/parallel/LocalPath 58.41
48 TestAddons/parallel/NvidiaDevicePlugin 5.37
49 TestAddons/parallel/Yakd 5
50 TestAddons/parallel/Volcano 40.14
53 TestAddons/serial/GCPAuth/Namespaces 0.1
54 TestAddons/StoppedEnableDisable 5.93
55 TestCertOptions 44.27
56 TestCertExpiration 377.88
57 TestDockerFlags 43.02
58 TestForceSystemdFlag 38.74
59 TestForceSystemdEnv 41.76
62 TestHyperKitDriverInstallOrUpdate 9.07
65 TestErrorSpam/setup 38.72
66 TestErrorSpam/start 1.58
67 TestErrorSpam/status 0.5
68 TestErrorSpam/pause 1.37
69 TestErrorSpam/unpause 1.3
70 TestErrorSpam/stop 155.81
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 165.86
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 41.38
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.05
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.03
82 TestFunctional/serial/CacheCmd/cache/add_local 1.33
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
84 TestFunctional/serial/CacheCmd/cache/list 0.08
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.04
87 TestFunctional/serial/CacheCmd/cache/delete 0.16
88 TestFunctional/serial/MinikubeKubectlCmd 1.13
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.44
90 TestFunctional/serial/ExtraConfig 39.58
91 TestFunctional/serial/ComponentHealth 0.05
92 TestFunctional/serial/LogsCmd 2.59
93 TestFunctional/serial/LogsFileCmd 2.82
94 TestFunctional/serial/InvalidService 3.76
96 TestFunctional/parallel/ConfigCmd 0.51
97 TestFunctional/parallel/DashboardCmd 13.71
98 TestFunctional/parallel/DryRun 1.07
99 TestFunctional/parallel/InternationalLanguage 0.46
100 TestFunctional/parallel/StatusCmd 0.51
104 TestFunctional/parallel/ServiceCmdConnect 15.41
105 TestFunctional/parallel/AddonsCmd 0.22
106 TestFunctional/parallel/PersistentVolumeClaim 27.44
108 TestFunctional/parallel/SSHCmd 0.3
109 TestFunctional/parallel/CpCmd 0.94
110 TestFunctional/parallel/MySQL 27.53
111 TestFunctional/parallel/FileSync 0.15
112 TestFunctional/parallel/CertSync 0.97
116 TestFunctional/parallel/NodeLabels 0.05
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.14
120 TestFunctional/parallel/License 0.43
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.37
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.13
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
132 TestFunctional/parallel/Version/short 0.1
133 TestFunctional/parallel/Version/components 0.56
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.18
138 TestFunctional/parallel/ImageCommands/ImageBuild 2.1
139 TestFunctional/parallel/ImageCommands/Setup 1.75
140 TestFunctional/parallel/ServiceCmd/DeployApp 7.17
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.96
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.62
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.37
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.28
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.31
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.48
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.31
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.25
149 TestFunctional/parallel/ProfileCmd/profile_list 0.26
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
151 TestFunctional/parallel/MountCmd/any-port 6.01
152 TestFunctional/parallel/ServiceCmd/List 0.38
153 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
154 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
155 TestFunctional/parallel/ServiceCmd/Format 0.27
156 TestFunctional/parallel/ServiceCmd/URL 0.25
157 TestFunctional/parallel/MountCmd/specific-port 1.7
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.69
159 TestFunctional/parallel/DockerEnv/bash 0.59
160 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
161 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
162 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestMultiControlPlane/serial/StartCluster 435.47
170 TestMultiControlPlane/serial/DeployApp 4.92
171 TestMultiControlPlane/serial/PingHostFromPods 1.27
172 TestMultiControlPlane/serial/AddWorkerNode 56.81
173 TestMultiControlPlane/serial/NodeLabels 0.05
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.34
175 TestMultiControlPlane/serial/CopyFile 9.09
176 TestMultiControlPlane/serial/StopSecondaryNode 8.77
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.26
178 TestMultiControlPlane/serial/RestartSecondaryNode 156.82
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.33
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 209.36
181 TestMultiControlPlane/serial/DeleteSecondaryNode 8.06
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.25
183 TestMultiControlPlane/serial/StopCluster 24.92
184 TestMultiControlPlane/serial/RestartCluster 243.36
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.25
186 TestMultiControlPlane/serial/AddSecondaryNode 77.41
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.33
190 TestImageBuild/serial/Setup 38.62
191 TestImageBuild/serial/NormalBuild 1.22
192 TestImageBuild/serial/BuildWithBuildArg 0.5
193 TestImageBuild/serial/BuildWithDockerIgnore 0.25
194 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.22
198 TestJSONOutput/start/Command 52.33
199 TestJSONOutput/start/Audit 0
201 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/pause/Command 0.48
205 TestJSONOutput/pause/Audit 0
207 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/unpause/Command 0.45
211 TestJSONOutput/unpause/Audit 0
213 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
216 TestJSONOutput/stop/Command 8.32
217 TestJSONOutput/stop/Audit 0
219 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
220 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
221 TestErrorJSONOutput 0.58
226 TestMainNoArgs 0.08
227 TestMinikubeProfile 91.42
230 TestMountStart/serial/StartWithMountFirst 21.21
231 TestMountStart/serial/VerifyMountFirst 0.29
235 TestMultiNode/serial/FreshStart2Nodes 125.85
236 TestMultiNode/serial/DeployApp2Nodes 4.4
237 TestMultiNode/serial/PingHostFrom2Pods 0.88
238 TestMultiNode/serial/AddNode 44.81
239 TestMultiNode/serial/MultiNodeLabels 0.05
240 TestMultiNode/serial/ProfileList 0.19
241 TestMultiNode/serial/CopyFile 5.27
242 TestMultiNode/serial/StopNode 2.82
243 TestMultiNode/serial/StartAfterStop 156.1
245 TestMultiNode/serial/DeleteNode 9.16
246 TestMultiNode/serial/StopMultiNode 16.8
247 TestMultiNode/serial/RestartMultiNode 112.56
248 TestMultiNode/serial/ValidateNameConflict 46.08
252 TestPreload 205.61
254 TestScheduledStopUnix 110.03
255 TestSkaffold 227.97
258 TestRunningBinaryUpgrade 83.6
260 TestKubernetesUpgrade 121.56
273 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.4
274 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 7.21
275 TestStoppedBinaryUpgrade/Setup 1.03
276 TestStoppedBinaryUpgrade/Upgrade 83.2
277 TestStoppedBinaryUpgrade/MinikubeLogs 3.29
279 TestPause/serial/Start 90.51
288 TestNoKubernetes/serial/StartNoK8sWithVersion 0.46
289 TestNoKubernetes/serial/StartWithK8s 52.2
290 TestNoKubernetes/serial/StartWithStopK8s 8.57
291 TestNoKubernetes/serial/Start 21.17
292 TestNoKubernetes/serial/VerifyK8sNotRunning 0.12
293 TestNoKubernetes/serial/ProfileList 0.46
294 TestNoKubernetes/serial/Stop 8.43
295 TestPause/serial/SecondStartNoReconfiguration 41.69
296 TestNoKubernetes/serial/StartNoArgs 19.68
297 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
298 TestNetworkPlugins/group/auto/Start 56.3
299 TestPause/serial/Pause 0.57
300 TestPause/serial/VerifyStatus 0.16
301 TestPause/serial/Unpause 0.54
302 TestPause/serial/PauseAgain 0.59
303 TestPause/serial/DeletePaused 5.24
304 TestPause/serial/VerifyDeletedResources 0.21
305 TestNetworkPlugins/group/kindnet/Start 72.71
306 TestNetworkPlugins/group/auto/KubeletFlags 0.15
307 TestNetworkPlugins/group/auto/NetCatPod 11.13
308 TestNetworkPlugins/group/auto/DNS 0.13
309 TestNetworkPlugins/group/auto/Localhost 0.1
310 TestNetworkPlugins/group/auto/HairPin 0.1
311 TestNetworkPlugins/group/calico/Start 72.29
312 TestNetworkPlugins/group/kindnet/ControllerPod 6
313 TestNetworkPlugins/group/kindnet/KubeletFlags 0.16
314 TestNetworkPlugins/group/kindnet/NetCatPod 10.16
315 TestNetworkPlugins/group/kindnet/DNS 0.13
316 TestNetworkPlugins/group/kindnet/Localhost 0.1
317 TestNetworkPlugins/group/kindnet/HairPin 0.1
318 TestNetworkPlugins/group/custom-flannel/Start 177.98
319 TestNetworkPlugins/group/calico/ControllerPod 6
320 TestNetworkPlugins/group/calico/KubeletFlags 0.16
321 TestNetworkPlugins/group/calico/NetCatPod 11.12
322 TestNetworkPlugins/group/calico/DNS 0.13
323 TestNetworkPlugins/group/calico/Localhost 0.1
324 TestNetworkPlugins/group/calico/HairPin 0.1
325 TestNetworkPlugins/group/false/Start 55.25
326 TestNetworkPlugins/group/false/KubeletFlags 0.15
327 TestNetworkPlugins/group/false/NetCatPod 12.13
328 TestNetworkPlugins/group/false/DNS 0.13
329 TestNetworkPlugins/group/false/Localhost 0.11
330 TestNetworkPlugins/group/false/HairPin 0.1
331 TestNetworkPlugins/group/enable-default-cni/Start 52.62
332 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.15
333 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.13
334 TestNetworkPlugins/group/custom-flannel/DNS 0.13
335 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
336 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
337 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.16
338 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.14
339 TestNetworkPlugins/group/flannel/Start 62.62
340 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
341 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
342 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
343 TestNetworkPlugins/group/bridge/Start 166.2
344 TestNetworkPlugins/group/flannel/ControllerPod 6.01
345 TestNetworkPlugins/group/flannel/KubeletFlags 0.15
346 TestNetworkPlugins/group/flannel/NetCatPod 11.13
347 TestNetworkPlugins/group/flannel/DNS 0.11
348 TestNetworkPlugins/group/flannel/Localhost 0.11
349 TestNetworkPlugins/group/flannel/HairPin 0.09
350 TestNetworkPlugins/group/kubenet/Start 92
351 TestNetworkPlugins/group/bridge/KubeletFlags 0.15
352 TestNetworkPlugins/group/bridge/NetCatPod 10.13
353 TestNetworkPlugins/group/kubenet/KubeletFlags 0.15
354 TestNetworkPlugins/group/kubenet/NetCatPod 11.14
355 TestNetworkPlugins/group/bridge/DNS 0.12
356 TestNetworkPlugins/group/bridge/Localhost 0.1
357 TestNetworkPlugins/group/bridge/HairPin 0.1
358 TestNetworkPlugins/group/kubenet/DNS 0.12
359 TestNetworkPlugins/group/kubenet/Localhost 0.09
360 TestNetworkPlugins/group/kubenet/HairPin 0.1
362 TestStartStop/group/old-k8s-version/serial/FirstStart 173.19
364 TestStartStop/group/no-preload/serial/FirstStart 210.23
365 TestStartStop/group/old-k8s-version/serial/DeployApp 9.33
366 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.74
367 TestStartStop/group/old-k8s-version/serial/Stop 8.39
368 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
369 TestStartStop/group/old-k8s-version/serial/SecondStart 403.76
370 TestStartStop/group/no-preload/serial/DeployApp 8.21
371 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.84
372 TestStartStop/group/no-preload/serial/Stop 8.45
373 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.35
374 TestStartStop/group/no-preload/serial/SecondStart 292.45
375 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
377 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.16
378 TestStartStop/group/no-preload/serial/Pause 1.91
380 TestStartStop/group/embed-certs/serial/FirstStart 89.71
381 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
382 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
383 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.16
384 TestStartStop/group/old-k8s-version/serial/Pause 1.91
386 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 92.32
387 TestStartStop/group/embed-certs/serial/DeployApp 8.22
388 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.78
389 TestStartStop/group/embed-certs/serial/Stop 8.49
390 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.33
391 TestStartStop/group/embed-certs/serial/SecondStart 290.02
392 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.2
393 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.75
394 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.41
395 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
396 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 312.16
397 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
398 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
399 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.16
400 TestStartStop/group/embed-certs/serial/Pause 1.93
402 TestStartStop/group/newest-cni/serial/FirstStart 41.58
403 TestStartStop/group/newest-cni/serial/DeployApp 0
404 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.88
405 TestStartStop/group/newest-cni/serial/Stop 8.47
406 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.32
407 TestStartStop/group/newest-cni/serial/SecondStart 29.97
408 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
409 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
410 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
411 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.17
412 TestStartStop/group/newest-cni/serial/Pause 1.83
413 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
414 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.18
415 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.95
TestDownloadOnly/v1.20.0/json-events (12s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-911000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-911000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (11.997129834s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.00s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-911000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-911000: exit status 85 (299.004118ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-911000 | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT |          |
	|         | -p download-only-911000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 11:13:03
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 11:13:03.198772    1594 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:13:03.199049    1594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:13:03.199055    1594 out.go:304] Setting ErrFile to fd 2...
	I0719 11:13:03.199059    1594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:13:03.199236    1594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
	W0719 11:13:03.199334    1594 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19307-1053/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19307-1053/.minikube/config/config.json: no such file or directory
	I0719 11:13:03.201066    1594 out.go:298] Setting JSON to true
	I0719 11:13:03.224520    1594 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":753,"bootTime":1721412030,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0719 11:13:03.224624    1594 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:13:03.247573    1594 out.go:97] [download-only-911000] minikube v1.33.1 on Darwin 14.5
	I0719 11:13:03.247677    1594 notify.go:220] Checking for updates...
	W0719 11:13:03.247686    1594 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19307-1053/.minikube/cache/preloaded-tarball: no such file or directory
	I0719 11:13:03.267568    1594 out.go:169] MINIKUBE_LOCATION=19307
	I0719 11:13:03.289535    1594 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 11:13:03.310797    1594 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0719 11:13:03.332634    1594 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:13:03.353728    1594 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1053/.minikube
	W0719 11:13:03.402661    1594 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 11:13:03.403117    1594 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:13:03.451220    1594 out.go:97] Using the hyperkit driver based on user configuration
	I0719 11:13:03.451246    1594 start.go:297] selected driver: hyperkit
	I0719 11:13:03.451253    1594 start.go:901] validating driver "hyperkit" against <nil>
	I0719 11:13:03.451360    1594 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:13:03.451526    1594 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19307-1053/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0719 11:13:03.865378    1594 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0719 11:13:03.870070    1594 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:13:03.870091    1594 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0719 11:13:03.870118    1594 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 11:13:03.874673    1594 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0719 11:13:03.875402    1594 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 11:13:03.875464    1594 cni.go:84] Creating CNI manager for ""
	I0719 11:13:03.875483    1594 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 11:13:03.875552    1594 start.go:340] cluster config:
	{Name:download-only-911000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:13:03.875771    1594 iso.go:125] acquiring lock: {Name:mkefd37d87f1d623b7fad18d7afa6e68e29a5c1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:13:03.897529    1594 out.go:97] Downloading VM boot image ...
	I0719 11:13:03.897628    1594 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19307-1053/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 11:13:08.558974    1594 out.go:97] Starting "download-only-911000" primary control-plane node in "download-only-911000" cluster
	I0719 11:13:08.559051    1594 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 11:13:08.613631    1594 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0719 11:13:08.613659    1594 cache.go:56] Caching tarball of preloaded images
	I0719 11:13:08.614270    1594 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 11:13:08.635053    1594 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0719 11:13:08.635094    1594 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 11:13:08.713216    1594 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19307-1053/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-911000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-911000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-911000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.30.3/json-events (6.51s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-699000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-699000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit : (6.51443162s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (6.51s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-699000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-699000: exit status 85 (293.173314ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-911000 | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT |                     |
	|         | -p download-only-911000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT | 19 Jul 24 11:13 PDT |
	| delete  | -p download-only-911000        | download-only-911000 | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT | 19 Jul 24 11:13 PDT |
	| start   | -o=json --download-only        | download-only-699000 | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT |                     |
	|         | -p download-only-699000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 11:13:15
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 11:13:15.933892    1622 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:13:15.934078    1622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:13:15.934084    1622 out.go:304] Setting ErrFile to fd 2...
	I0719 11:13:15.934088    1622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:13:15.934268    1622 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
	I0719 11:13:15.935648    1622 out.go:298] Setting JSON to true
	I0719 11:13:15.961157    1622 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":765,"bootTime":1721412030,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0719 11:13:15.961257    1622 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:13:15.983869    1622 out.go:97] [download-only-699000] minikube v1.33.1 on Darwin 14.5
	I0719 11:13:15.984062    1622 notify.go:220] Checking for updates...
	I0719 11:13:16.004706    1622 out.go:169] MINIKUBE_LOCATION=19307
	I0719 11:13:16.025729    1622 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 11:13:16.046730    1622 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0719 11:13:16.088726    1622 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:13:16.132712    1622 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1053/.minikube
	W0719 11:13:16.174644    1622 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 11:13:16.175101    1622 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:13:16.204887    1622 out.go:97] Using the hyperkit driver based on user configuration
	I0719 11:13:16.204914    1622 start.go:297] selected driver: hyperkit
	I0719 11:13:16.204920    1622 start.go:901] validating driver "hyperkit" against <nil>
	I0719 11:13:16.205036    1622 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:13:16.205135    1622 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19307-1053/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0719 11:13:16.213836    1622 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0719 11:13:16.218045    1622 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:13:16.218066    1622 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0719 11:13:16.218093    1622 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 11:13:16.220960    1622 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0719 11:13:16.221106    1622 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 11:13:16.221130    1622 cni.go:84] Creating CNI manager for ""
	I0719 11:13:16.221155    1622 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:13:16.221163    1622 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 11:13:16.221236    1622 start.go:340] cluster config:
	{Name:download-only-699000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-699000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:13:16.221326    1622 iso.go:125] acquiring lock: {Name:mkefd37d87f1d623b7fad18d7afa6e68e29a5c1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:13:16.242686    1622 out.go:97] Starting "download-only-699000" primary control-plane node in "download-only-699000" cluster
	I0719 11:13:16.242704    1622 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 11:13:16.296472    1622 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 11:13:16.296488    1622 cache.go:56] Caching tarball of preloaded images
	I0719 11:13:16.296674    1622 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 11:13:16.317821    1622 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0719 11:13:16.317830    1622 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0719 11:13:16.395639    1622 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> /Users/jenkins/minikube-integration/19307-1053/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 11:13:20.712400    1622 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0719 11:13:20.712571    1622 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19307-1053/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-699000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-699000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

TestDownloadOnly/v1.30.3/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.23s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-699000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0-beta.0/json-events (6.52s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-064000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-064000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperkit : (6.516613578s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (6.52s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-064000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-064000: exit status 85 (293.315165ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-911000 | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT |                     |
	|         | -p download-only-911000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT | 19 Jul 24 11:13 PDT |
	| delete  | -p download-only-911000             | download-only-911000 | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT | 19 Jul 24 11:13 PDT |
	| start   | -o=json --download-only             | download-only-699000 | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT |                     |
	|         | -p download-only-699000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT | 19 Jul 24 11:13 PDT |
	| delete  | -p download-only-699000             | download-only-699000 | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT | 19 Jul 24 11:13 PDT |
	| start   | -o=json --download-only             | download-only-064000 | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT |                     |
	|         | -p download-only-064000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 11:13:23
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 11:13:23.179769    1650 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:13:23.179968    1650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:13:23.179973    1650 out.go:304] Setting ErrFile to fd 2...
	I0719 11:13:23.179976    1650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:13:23.180157    1650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
	I0719 11:13:23.181567    1650 out.go:298] Setting JSON to true
	I0719 11:13:23.207013    1650 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":773,"bootTime":1721412030,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0719 11:13:23.207101    1650 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:13:23.228444    1650 out.go:97] [download-only-064000] minikube v1.33.1 on Darwin 14.5
	I0719 11:13:23.228627    1650 notify.go:220] Checking for updates...
	I0719 11:13:23.249266    1650 out.go:169] MINIKUBE_LOCATION=19307
	I0719 11:13:23.270476    1650 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 11:13:23.291552    1650 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0719 11:13:23.312404    1650 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:13:23.333502    1650 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1053/.minikube
	W0719 11:13:23.375610    1650 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 11:13:23.376112    1650 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:13:23.406413    1650 out.go:97] Using the hyperkit driver based on user configuration
	I0719 11:13:23.406455    1650 start.go:297] selected driver: hyperkit
	I0719 11:13:23.406467    1650 start.go:901] validating driver "hyperkit" against <nil>
	I0719 11:13:23.406661    1650 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:13:23.406852    1650 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19307-1053/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0719 11:13:23.416193    1650 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0719 11:13:23.420390    1650 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:13:23.420410    1650 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0719 11:13:23.420433    1650 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 11:13:23.423247    1650 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0719 11:13:23.423411    1650 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 11:13:23.423434    1650 cni.go:84] Creating CNI manager for ""
	I0719 11:13:23.423451    1650 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:13:23.423458    1650 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 11:13:23.423529    1650 start.go:340] cluster config:
	{Name:download-only-064000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:13:23.423614    1650 iso.go:125] acquiring lock: {Name:mkefd37d87f1d623b7fad18d7afa6e68e29a5c1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:13:23.444492    1650 out.go:97] Starting "download-only-064000" primary control-plane node in "download-only-064000" cluster
	I0719 11:13:23.444513    1650 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 11:13:23.493788    1650 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0719 11:13:23.493830    1650 cache.go:56] Caching tarball of preloaded images
	I0719 11:13:23.494200    1650 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 11:13:23.516410    1650 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0719 11:13:23.516419    1650 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 11:13:23.591264    1650 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> /Users/jenkins/minikube-integration/19307-1053/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0719 11:13:27.745760    1650 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 11:13:27.746032    1650 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19307-1053/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-064000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-064000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.25s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-064000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

TestBinaryMirror (0.92s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-701000 --alsologtostderr --binary-mirror http://127.0.0.1:49633 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-701000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-701000
--- PASS: TestBinaryMirror (0.92s)

TestOffline (97.25s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-636000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-636000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (1m32.019337238s)
helpers_test.go:175: Cleaning up "offline-docker-636000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-636000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-636000: (5.2343519s)
--- PASS: TestOffline (97.25s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-910000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-910000: exit status 85 (166.647839ms)

-- stdout --
	* Profile "addons-910000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-910000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-910000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-910000: exit status 85 (186.933115ms)

-- stdout --
	* Profile "addons-910000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-910000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (212.26s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-910000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-910000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m32.264231696s)
--- PASS: TestAddons/Setup (212.26s)

TestAddons/parallel/Registry (14.24s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 10.107299ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-qf8m8" [cb1235ed-7236-4ce4-b0e2-21bd8cea7f6b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005061277s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jq462" [39df25c3-06ec-4be4-9d61-c6834fc779f8] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003363751s
addons_test.go:342: (dbg) Run:  kubectl --context addons-910000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-910000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-910000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.590051938s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-910000 ip
2024/07/19 11:17:18 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-910000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.24s)

TestAddons/parallel/Ingress (19.14s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-910000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-910000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-910000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4dbad50d-4f26-4894-bdd3-9afe9177d637] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4dbad50d-4f26-4894-bdd3-9afe9177d637] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005555143s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-910000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-910000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-910000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-910000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-910000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-910000 addons disable ingress --alsologtostderr -v=1: (7.516175199s)
--- PASS: TestAddons/parallel/Ingress (19.14s)

TestAddons/parallel/InspektorGadget (10.52s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7bjcr" [0ea522e1-3b7c-42f0-bf6d-da47faca58ac] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003432819s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-910000
addons_test.go:843: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-910000: (5.512413662s)
--- PASS: TestAddons/parallel/InspektorGadget (10.52s)

TestAddons/parallel/MetricsServer (5.49s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.860538ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-c4b2g" [ecf0ffbc-c0c7-4ae3-aa30-df0d1d6ecc51] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004945811s
addons_test.go:417: (dbg) Run:  kubectl --context addons-910000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-910000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.49s)

TestAddons/parallel/HelmTiller (9.91s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.75176ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-zjjpd" [f5ea007a-1eff-489d-afa4-e87c8c06dfc9] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003073914s
addons_test.go:475: (dbg) Run:  kubectl --context addons-910000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-910000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.465808602s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-910000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.91s)

TestAddons/parallel/CSI (50.48s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 3.222148ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-910000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-910000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d7f5b8ec-6b77-4cf2-a321-8e72d4a01841] Pending
helpers_test.go:344: "task-pv-pod" [d7f5b8ec-6b77-4cf2-a321-8e72d4a01841] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d7f5b8ec-6b77-4cf2-a321-8e72d4a01841] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.002236724s
addons_test.go:586: (dbg) Run:  kubectl --context addons-910000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-910000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-910000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-910000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-910000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-910000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-910000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d1de69e8-2a37-4f12-959e-dcdfb46fa97b] Pending
helpers_test.go:344: "task-pv-pod-restore" [d1de69e8-2a37-4f12-959e-dcdfb46fa97b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d1de69e8-2a37-4f12-959e-dcdfb46fa97b] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.002725446s
addons_test.go:628: (dbg) Run:  kubectl --context addons-910000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-910000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-910000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-amd64 -p addons-910000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-amd64 -p addons-910000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.463861762s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-910000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.48s)

TestAddons/parallel/Headlamp (13.02s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-910000 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-910000 --alsologtostderr -v=1: (1.013337s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-8p6g2" [f1679474-c87b-4379-ad50-85741212d949] Pending
helpers_test.go:344: "headlamp-7867546754-8p6g2" [f1679474-c87b-4379-ad50-85741212d949] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-8p6g2" [f1679474-c87b-4379-ad50-85741212d949] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004829823s
--- PASS: TestAddons/parallel/Headlamp (13.02s)

TestAddons/parallel/CloudSpanner (5.38s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-ccnp7" [a85066d7-cffa-496b-83da-2c38c5ef8d5c] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002311741s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-910000
--- PASS: TestAddons/parallel/CloudSpanner (5.38s)

TestAddons/parallel/LocalPath (58.41s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-910000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-910000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-910000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c506d13b-aed0-4b4d-be96-8defd1b3cdd9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c506d13b-aed0-4b4d-be96-8defd1b3cdd9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c506d13b-aed0-4b4d-be96-8defd1b3cdd9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.00382872s
addons_test.go:992: (dbg) Run:  kubectl --context addons-910000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-amd64 -p addons-910000 ssh "cat /opt/local-path-provisioner/pvc-19e22e7e-1909-4115-b64b-82161b1f2423_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-910000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-910000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-amd64 -p addons-910000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-darwin-amd64 -p addons-910000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.741585415s)
--- PASS: TestAddons/parallel/LocalPath (58.41s)

TestAddons/parallel/NvidiaDevicePlugin (5.37s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fp9cg" [5f5f4096-0919-48f9-883a-8fe6c4edac65] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00759709s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-910000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.37s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-hv9qh" [6cac2b83-f89f-4140-ba4a-6af4d4564a52] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003517493s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/parallel/Volcano (40.14s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:889: volcano-scheduler stabilized in 1.731944ms
addons_test.go:905: volcano-controller stabilized in 1.95654ms
addons_test.go:897: volcano-admission stabilized in 2.18761ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-fjcck" [0782725f-8e12-4db5-8138-c9ec481c3fc7] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.003324692s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-nrkhw" [42e08cb7-aaaf-405d-b7dd-b27c1ea7acaf] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.002691313s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-skl5f" [93a9d563-ed04-49a3-86e0-715744a80ced] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.002467669s
addons_test.go:924: (dbg) Run:  kubectl --context addons-910000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-910000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-910000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [9c6ec90d-7a68-4bd7-91d8-ce69665da38c] Pending
helpers_test.go:344: "test-job-nginx-0" [9c6ec90d-7a68-4bd7-91d8-ce69665da38c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [9c6ec90d-7a68-4bd7-91d8-ce69665da38c] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 15.00247022s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-amd64 -p addons-910000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-amd64 -p addons-910000 addons disable volcano --alsologtostderr -v=1: (9.898041541s)
--- PASS: TestAddons/parallel/Volcano (40.14s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-910000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-910000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (5.93s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-910000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-910000: (5.394035841s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-910000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-910000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-910000
--- PASS: TestAddons/StoppedEnableDisable (5.93s)

TestCertOptions (44.27s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-533000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-533000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (38.67508719s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-533000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-533000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-533000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-533000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-533000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-533000: (5.246687403s)
--- PASS: TestCertOptions (44.27s)

                                                
                                    
TestCertExpiration (377.88s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-823000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-823000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (2m43.88499964s)
E0719 12:20:43.280001    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:20:43.285148    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:20:43.296259    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:20:43.316512    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:20:43.356811    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:20:43.437349    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:20:43.598869    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:20:43.919225    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:20:44.561146    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:20:45.841875    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:20:48.402240    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:20:53.524274    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:21:03.765320    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-823000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-823000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (28.752590724s)
helpers_test.go:175: Cleaning up "cert-expiration-823000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-823000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-823000: (5.236802255s)
--- PASS: TestCertExpiration (377.88s)

                                                
                                    
TestDockerFlags (43.02s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-563000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-563000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (37.474247051s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-563000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-563000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-563000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-563000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-563000: (5.237006856s)
--- PASS: TestDockerFlags (43.02s)

                                                
                                    
TestForceSystemdFlag (38.74s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-993000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
E0719 12:17:12.190014    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-993000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (35.201411167s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-993000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-993000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-993000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-993000: (3.372414779s)
--- PASS: TestForceSystemdFlag (38.74s)

                                                
                                    
TestForceSystemdEnv (41.76s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-719000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-719000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (38.129549827s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-719000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-719000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-719000
E0719 12:17:04.468945    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-719000: (3.461262975s)
--- PASS: TestForceSystemdEnv (41.76s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (9.07s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.07s)

                                                
                                    
TestErrorSpam/setup (38.72s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-241000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-241000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 --driver=hyperkit : (38.724514586s)
--- PASS: TestErrorSpam/setup (38.72s)

                                                
                                    
TestErrorSpam/start (1.58s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 start --dry-run
--- PASS: TestErrorSpam/start (1.58s)

                                                
                                    
TestErrorSpam/status (0.5s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 status
--- PASS: TestErrorSpam/status (0.50s)

                                                
                                    
TestErrorSpam/pause (1.37s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 pause
--- PASS: TestErrorSpam/pause (1.37s)

                                                
                                    
TestErrorSpam/unpause (1.3s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 unpause
--- PASS: TestErrorSpam/unpause (1.30s)

                                                
                                    
TestErrorSpam/stop (155.81s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 stop: (5.354264194s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 stop: (1m15.22837536s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 stop
E0719 11:22:04.378113    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:22:04.385468    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:22:04.397665    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:22:04.418205    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:22:04.460401    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:22:04.540946    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:22:04.702224    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:22:05.024249    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:22:05.666532    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:22:06.946738    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:22:09.506894    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:22:14.626965    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:22:24.866975    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:22:45.346991    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-241000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-241000 stop: (1m15.222711374s)
--- PASS: TestErrorSpam/stop (155.81s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19307-1053/.minikube/files/etc/test/nested/copy/1592/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (165.86s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-462000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0719 11:23:26.307153    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:24:48.247204    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-462000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (2m45.859402109s)
--- PASS: TestFunctional/serial/StartWithProxy (165.86s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.38s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-462000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-462000 --alsologtostderr -v=8: (41.377819542s)
functional_test.go:659: soft start took 41.378444753s for "functional-462000" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.38s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.05s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-462000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-462000 cache add registry.k8s.io/pause:3.1: (1.110202199s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-462000 cache add registry.k8s.io/pause:3.3: (1.02964348s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.33s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-462000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local1546519334/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 cache add minikube-local-cache-test:functional-462000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 cache delete minikube-local-cache-test:functional-462000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-462000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-462000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (148.049704ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (1.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 kubectl -- --context functional-462000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-462000 kubectl -- --context functional-462000 get pods: (1.129918748s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.44s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-462000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-462000 get pods: (1.441483934s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.44s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.58s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-462000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-462000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.575921696s)
functional_test.go:757: restart took 39.57605632s for "functional-462000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.58s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.05s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-462000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                    
TestFunctional/serial/LogsCmd (2.59s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 logs
E0719 11:27:04.395951    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-462000 logs: (2.59402416s)
--- PASS: TestFunctional/serial/LogsCmd (2.59s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (2.82s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2359055595/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-462000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2359055595/001/logs.txt: (2.820097522s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.82s)

                                                
                                    
TestFunctional/serial/InvalidService (3.76s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-462000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-462000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-462000: exit status 115 (281.441466ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:30522 |
	|-----------|-------------|-------------|--------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-462000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.76s)

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-462000 config get cpus: exit status 14 (75.259717ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-462000 config get cpus: exit status 14 (55.31447ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

TestFunctional/parallel/DashboardCmd (13.71s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-462000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-462000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2714: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.71s)

TestFunctional/parallel/DryRun (1.07s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-462000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-462000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (553.003629ms)

-- stdout --
	* [functional-462000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1053/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile

-- /stdout --
** stderr ** 
	I0719 11:27:49.304154    2663 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:27:49.304345    2663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:27:49.304350    2663 out.go:304] Setting ErrFile to fd 2...
	I0719 11:27:49.304354    2663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:27:49.304531    2663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
	I0719 11:27:49.306007    2663 out.go:298] Setting JSON to false
	I0719 11:27:49.328356    2663 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1639,"bootTime":1721412030,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0719 11:27:49.328451    2663 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:27:49.350382    2663 out.go:177] * [functional-462000] minikube v1.33.1 on Darwin 14.5
	I0719 11:27:49.392034    2663 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:27:49.392057    2663 notify.go:220] Checking for updates...
	I0719 11:27:49.433915    2663 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 11:27:49.454881    2663 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0719 11:27:49.476170    2663 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:27:49.497135    2663 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1053/.minikube
	I0719 11:27:49.518116    2663 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:27:49.539351    2663 config.go:182] Loaded profile config "functional-462000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:27:49.539704    2663 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:27:49.539752    2663 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:27:49.548799    2663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50681
	I0719 11:27:49.549170    2663 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:27:49.549593    2663 main.go:141] libmachine: Using API Version  1
	I0719 11:27:49.549610    2663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:27:49.550018    2663 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:27:49.550165    2663 main.go:141] libmachine: (functional-462000) Calling .DriverName
	I0719 11:27:49.550379    2663 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:27:49.550648    2663 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:27:49.550672    2663 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:27:49.559432    2663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50683
	I0719 11:27:49.559785    2663 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:27:49.560126    2663 main.go:141] libmachine: Using API Version  1
	I0719 11:27:49.560143    2663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:27:49.560353    2663 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:27:49.560477    2663 main.go:141] libmachine: (functional-462000) Calling .DriverName
	I0719 11:27:49.589010    2663 out.go:177] * Using the hyperkit driver based on existing profile
	I0719 11:27:49.647074    2663 start.go:297] selected driver: hyperkit
	I0719 11:27:49.647098    2663 start.go:901] validating driver "hyperkit" against &{Name:functional-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:27:49.647299    2663 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:27:49.672003    2663 out.go:177] 
	W0719 11:27:49.708866    2663 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0719 11:27:49.730101    2663 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-462000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.07s)

TestFunctional/parallel/InternationalLanguage (0.46s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-462000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-462000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (461.481095ms)

-- stdout --
	* [functional-462000] minikube v1.33.1 sur Darwin 14.5
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1053/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant

-- /stdout --
** stderr ** 
	I0719 11:27:48.834996    2656 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:27:48.835155    2656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:27:48.835160    2656 out.go:304] Setting ErrFile to fd 2...
	I0719 11:27:48.835163    2656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:27:48.835366    2656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
	I0719 11:27:48.836942    2656 out.go:298] Setting JSON to false
	I0719 11:27:48.859679    2656 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1638,"bootTime":1721412030,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0719 11:27:48.859807    2656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:27:48.882001    2656 out.go:177] * [functional-462000] minikube v1.33.1 sur Darwin 14.5
	I0719 11:27:48.923585    2656 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:27:48.923613    2656 notify.go:220] Checking for updates...
	I0719 11:27:48.966419    2656 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
	I0719 11:27:48.987657    2656 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0719 11:27:49.008579    2656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:27:49.029525    2656 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1053/.minikube
	I0719 11:27:49.050888    2656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:27:49.072840    2656 config.go:182] Loaded profile config "functional-462000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:27:49.073237    2656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:27:49.073277    2656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:27:49.082077    2656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50676
	I0719 11:27:49.082463    2656 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:27:49.082915    2656 main.go:141] libmachine: Using API Version  1
	I0719 11:27:49.082927    2656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:27:49.083158    2656 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:27:49.083282    2656 main.go:141] libmachine: (functional-462000) Calling .DriverName
	I0719 11:27:49.083468    2656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:27:49.083727    2656 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:27:49.083752    2656 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:27:49.091996    2656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50678
	I0719 11:27:49.092327    2656 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:27:49.092640    2656 main.go:141] libmachine: Using API Version  1
	I0719 11:27:49.092649    2656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:27:49.092862    2656 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:27:49.092975    2656 main.go:141] libmachine: (functional-462000) Calling .DriverName
	I0719 11:27:49.121360    2656 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0719 11:27:49.142652    2656 start.go:297] selected driver: hyperkit
	I0719 11:27:49.142670    2656 start.go:901] validating driver "hyperkit" against &{Name:functional-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:27:49.142821    2656 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:27:49.166573    2656 out.go:177] 
	W0719 11:27:49.187490    2656 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0719 11:27:49.208491    2656 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.46s)

TestFunctional/parallel/StatusCmd (0.51s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.51s)

TestFunctional/parallel/ServiceCmdConnect (15.41s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-462000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-462000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-v6m5q" [d4731518-ea3f-489c-97ec-31bc58dbfa61] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-v6m5q" [d4731518-ea3f-489c-97ec-31bc58dbfa61] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.005319956s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.169.0.4:30664
functional_test.go:1671: http://192.169.0.4:30664: success! body:

Hostname: hello-node-connect-57b4589c47-v6m5q

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:30664
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (15.41s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (27.44s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1c625bc2-99d0-454f-bbe2-cc0ecd865c08] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00257473s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-462000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-462000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-462000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-462000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f09740ae-23c9-4dca-adfc-e936efef088f] Pending
helpers_test.go:344: "sp-pod" [f09740ae-23c9-4dca-adfc-e936efef088f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f09740ae-23c9-4dca-adfc-e936efef088f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004223706s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-462000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-462000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-462000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d9b08135-1c13-473d-b89a-399c864129dc] Pending
E0719 11:27:32.086462    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [d9b08135-1c13-473d-b89a-399c864129dc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d9b08135-1c13-473d-b89a-399c864129dc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003937526s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-462000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.44s)

TestFunctional/parallel/SSHCmd (0.3s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.30s)

TestFunctional/parallel/CpCmd (0.94s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh -n functional-462000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 cp functional-462000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd1474102209/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh -n functional-462000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh -n functional-462000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.94s)

TestFunctional/parallel/MySQL (27.53s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-462000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-rhxgp" [1224eadf-fcfa-4d31-9a51-42601e33b5f3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2024/07/19 11:28:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "mysql-64454c8b5c-rhxgp" [1224eadf-fcfa-4d31-9a51-42601e33b5f3] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.004349104s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-462000 exec mysql-64454c8b5c-rhxgp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-462000 exec mysql-64454c8b5c-rhxgp -- mysql -ppassword -e "show databases;": exit status 1 (115.221298ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-462000 exec mysql-64454c8b5c-rhxgp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.53s)

TestFunctional/parallel/FileSync (0.15s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1592/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "sudo cat /etc/test/nested/copy/1592/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.15s)

TestFunctional/parallel/CertSync (0.97s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1592.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "sudo cat /etc/ssl/certs/1592.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1592.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "sudo cat /usr/share/ca-certificates/1592.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/15922.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "sudo cat /etc/ssl/certs/15922.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/15922.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "sudo cat /usr/share/ca-certificates/15922.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.97s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-462000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-462000 ssh "sudo systemctl is-active crio": exit status 1 (138.574416ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)

TestFunctional/parallel/License (0.43s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.43s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-462000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-462000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-462000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-462000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2363: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-462000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-462000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5d5e3775-41ce-40f6-bf60-718b2b3919c2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5d5e3775-41ce-40f6-bf60-718b2b3919c2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.002966055s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.13s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-462000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.67.26 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-462000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.56s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-462000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-462000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-462000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-462000 image ls --format short --alsologtostderr:
I0719 11:28:04.744117    2832 out.go:291] Setting OutFile to fd 1 ...
I0719 11:28:04.744330    2832 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:28:04.744336    2832 out.go:304] Setting ErrFile to fd 2...
I0719 11:28:04.744340    2832 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:28:04.744523    2832 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
I0719 11:28:04.745100    2832 config.go:182] Loaded profile config "functional-462000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:28:04.745195    2832 config.go:182] Loaded profile config "functional-462000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:28:04.745557    2832 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 11:28:04.745602    2832 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 11:28:04.754040    2832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50918
I0719 11:28:04.754501    2832 main.go:141] libmachine: () Calling .GetVersion
I0719 11:28:04.754947    2832 main.go:141] libmachine: Using API Version  1
I0719 11:28:04.754956    2832 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 11:28:04.755162    2832 main.go:141] libmachine: () Calling .GetMachineName
I0719 11:28:04.755260    2832 main.go:141] libmachine: (functional-462000) Calling .GetState
I0719 11:28:04.755355    2832 main.go:141] libmachine: (functional-462000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0719 11:28:04.755433    2832 main.go:141] libmachine: (functional-462000) DBG | hyperkit pid from json: 2125
I0719 11:28:04.756660    2832 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 11:28:04.756681    2832 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 11:28:04.765263    2832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50920
I0719 11:28:04.765649    2832 main.go:141] libmachine: () Calling .GetVersion
I0719 11:28:04.766003    2832 main.go:141] libmachine: Using API Version  1
I0719 11:28:04.766017    2832 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 11:28:04.766228    2832 main.go:141] libmachine: () Calling .GetMachineName
I0719 11:28:04.766330    2832 main.go:141] libmachine: (functional-462000) Calling .DriverName
I0719 11:28:04.766491    2832 ssh_runner.go:195] Run: systemctl --version
I0719 11:28:04.766511    2832 main.go:141] libmachine: (functional-462000) Calling .GetSSHHostname
I0719 11:28:04.766592    2832 main.go:141] libmachine: (functional-462000) Calling .GetSSHPort
I0719 11:28:04.766670    2832 main.go:141] libmachine: (functional-462000) Calling .GetSSHKeyPath
I0719 11:28:04.766773    2832 main.go:141] libmachine: (functional-462000) Calling .GetSSHUsername
I0719 11:28:04.766872    2832 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/functional-462000/id_rsa Username:docker}
I0719 11:28:04.807876    2832 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0719 11:28:04.850151    2832 main.go:141] libmachine: Making call to close driver server
I0719 11:28:04.850165    2832 main.go:141] libmachine: (functional-462000) Calling .Close
I0719 11:28:04.850342    2832 main.go:141] libmachine: Successfully made call to close driver server
I0719 11:28:04.850353    2832 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 11:28:04.850360    2832 main.go:141] libmachine: Making call to close driver server
I0719 11:28:04.850367    2832 main.go:141] libmachine: (functional-462000) Calling .Close
I0719 11:28:04.850371    2832 main.go:141] libmachine: (functional-462000) DBG | Closing plugin on server side
I0719 11:28:04.850510    2832 main.go:141] libmachine: Successfully made call to close driver server
I0719 11:28:04.850519    2832 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 11:28:04.850534    2832 main.go:141] libmachine: (functional-462000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-462000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| docker.io/library/nginx                     | latest            | fffffc90d343c | 188MB  |
| docker.io/library/nginx                     | alpine            | 099a2d701db1f | 43.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kicbase/echo-server               | functional-462000 | 9056ab77afb8e | 4.94MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/library/minikube-local-cache-test | functional-462000 | b82a829150fd2 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-462000 image ls --format table --alsologtostderr:
I0719 11:28:05.316383    2844 out.go:291] Setting OutFile to fd 1 ...
I0719 11:28:05.316669    2844 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:28:05.316675    2844 out.go:304] Setting ErrFile to fd 2...
I0719 11:28:05.316679    2844 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:28:05.316854    2844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
I0719 11:28:05.317426    2844 config.go:182] Loaded profile config "functional-462000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:28:05.317524    2844 config.go:182] Loaded profile config "functional-462000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:28:05.317915    2844 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 11:28:05.317956    2844 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 11:28:05.326481    2844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50933
I0719 11:28:05.326925    2844 main.go:141] libmachine: () Calling .GetVersion
I0719 11:28:05.327331    2844 main.go:141] libmachine: Using API Version  1
I0719 11:28:05.327360    2844 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 11:28:05.327589    2844 main.go:141] libmachine: () Calling .GetMachineName
I0719 11:28:05.327722    2844 main.go:141] libmachine: (functional-462000) Calling .GetState
I0719 11:28:05.327820    2844 main.go:141] libmachine: (functional-462000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0719 11:28:05.327894    2844 main.go:141] libmachine: (functional-462000) DBG | hyperkit pid from json: 2125
I0719 11:28:05.329118    2844 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 11:28:05.329143    2844 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 11:28:05.337624    2844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50935
I0719 11:28:05.337983    2844 main.go:141] libmachine: () Calling .GetVersion
I0719 11:28:05.338316    2844 main.go:141] libmachine: Using API Version  1
I0719 11:28:05.338326    2844 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 11:28:05.338578    2844 main.go:141] libmachine: () Calling .GetMachineName
I0719 11:28:05.338705    2844 main.go:141] libmachine: (functional-462000) Calling .DriverName
I0719 11:28:05.338872    2844 ssh_runner.go:195] Run: systemctl --version
I0719 11:28:05.338891    2844 main.go:141] libmachine: (functional-462000) Calling .GetSSHHostname
I0719 11:28:05.338979    2844 main.go:141] libmachine: (functional-462000) Calling .GetSSHPort
I0719 11:28:05.339070    2844 main.go:141] libmachine: (functional-462000) Calling .GetSSHKeyPath
I0719 11:28:05.339157    2844 main.go:141] libmachine: (functional-462000) Calling .GetSSHUsername
I0719 11:28:05.339256    2844 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/functional-462000/id_rsa Username:docker}
I0719 11:28:05.379917    2844 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0719 11:28:05.425766    2844 main.go:141] libmachine: Making call to close driver server
I0719 11:28:05.425777    2844 main.go:141] libmachine: (functional-462000) Calling .Close
I0719 11:28:05.425939    2844 main.go:141] libmachine: Successfully made call to close driver server
I0719 11:28:05.425950    2844 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 11:28:05.425956    2844 main.go:141] libmachine: Making call to close driver server
I0719 11:28:05.425960    2844 main.go:141] libmachine: (functional-462000) Calling .Close
I0719 11:28:05.425959    2844 main.go:141] libmachine: (functional-462000) DBG | Closing plugin on server side
I0719 11:28:05.426107    2844 main.go:141] libmachine: Successfully made call to close driver server
I0719 11:28:05.426115    2844 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 11:28:05.426115    2844 main.go:141] libmachine: (functional-462000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-462000 image ls --format json --alsologtostderr:
[{"id":"099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"b82a829150fd2483b4551e858bfe76d1846039d58efccdf73ee3fb2a77f0261a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-462000"],"size":"30"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-462000"],"size":"4940000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-462000 image ls --format json --alsologtostderr:
I0719 11:28:05.111872    2840 out.go:291] Setting OutFile to fd 1 ...
I0719 11:28:05.112170    2840 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:28:05.112176    2840 out.go:304] Setting ErrFile to fd 2...
I0719 11:28:05.112179    2840 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:28:05.112363    2840 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
I0719 11:28:05.113011    2840 config.go:182] Loaded profile config "functional-462000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:28:05.113112    2840 config.go:182] Loaded profile config "functional-462000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:28:05.113465    2840 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 11:28:05.113522    2840 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 11:28:05.122690    2840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50928
I0719 11:28:05.123193    2840 main.go:141] libmachine: () Calling .GetVersion
I0719 11:28:05.123634    2840 main.go:141] libmachine: Using API Version  1
I0719 11:28:05.123646    2840 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 11:28:05.123943    2840 main.go:141] libmachine: () Calling .GetMachineName
I0719 11:28:05.124085    2840 main.go:141] libmachine: (functional-462000) Calling .GetState
I0719 11:28:05.124217    2840 main.go:141] libmachine: (functional-462000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0719 11:28:05.124305    2840 main.go:141] libmachine: (functional-462000) DBG | hyperkit pid from json: 2125
I0719 11:28:05.125613    2840 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 11:28:05.125641    2840 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 11:28:05.134353    2840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50930
I0719 11:28:05.134760    2840 main.go:141] libmachine: () Calling .GetVersion
I0719 11:28:05.135127    2840 main.go:141] libmachine: Using API Version  1
I0719 11:28:05.135140    2840 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 11:28:05.135369    2840 main.go:141] libmachine: () Calling .GetMachineName
I0719 11:28:05.135499    2840 main.go:141] libmachine: (functional-462000) Calling .DriverName
I0719 11:28:05.135669    2840 ssh_runner.go:195] Run: systemctl --version
I0719 11:28:05.135688    2840 main.go:141] libmachine: (functional-462000) Calling .GetSSHHostname
I0719 11:28:05.135778    2840 main.go:141] libmachine: (functional-462000) Calling .GetSSHPort
I0719 11:28:05.135857    2840 main.go:141] libmachine: (functional-462000) Calling .GetSSHKeyPath
I0719 11:28:05.135970    2840 main.go:141] libmachine: (functional-462000) Calling .GetSSHUsername
I0719 11:28:05.136065    2840 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/functional-462000/id_rsa Username:docker}
I0719 11:28:05.176172    2840 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0719 11:28:05.237923    2840 main.go:141] libmachine: Making call to close driver server
I0719 11:28:05.237936    2840 main.go:141] libmachine: (functional-462000) Calling .Close
I0719 11:28:05.238085    2840 main.go:141] libmachine: Successfully made call to close driver server
I0719 11:28:05.238085    2840 main.go:141] libmachine: (functional-462000) DBG | Closing plugin on server side
I0719 11:28:05.238094    2840 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 11:28:05.238100    2840 main.go:141] libmachine: Making call to close driver server
I0719 11:28:05.238105    2840 main.go:141] libmachine: (functional-462000) Calling .Close
I0719 11:28:05.238260    2840 main.go:141] libmachine: Successfully made call to close driver server
I0719 11:28:05.238272    2840 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 11:28:05.238281    2840 main.go:141] libmachine: (functional-462000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
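The `image ls --format json` output checked by this test is a single JSON array of image records. As an illustration only, the sketch below consumes two records copied from the log; the field meanings are inferred (`size` appears to be a byte count encoded as a decimal string):

```python
import json

# Two records in the same shape as the `image ls --format json` output above.
# Assumption: "size" is bytes, serialized as a string.
sample = """[
  {"id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
   "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.9"], "size": "744000"},
  {"id": "fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c",
   "repoDigests": [], "repoTags": ["docker.io/library/nginx:latest"], "size": "188000000"}
]"""

images = json.loads(sample)

# Map each repo tag to its image size in megabytes.
tags_mb = {tag: int(img["size"]) / 1e6
           for img in images
           for tag in img["repoTags"]}

for tag, mb in sorted(tags_mb.items()):
    print(f"{tag}: {mb:g} MB")
```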

TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-462000 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: b82a829150fd2483b4551e858bfe76d1846039d58efccdf73ee3fb2a77f0261a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-462000
size: "30"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-462000
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-462000 image ls --format yaml --alsologtostderr:
I0719 11:28:04.929772    2836 out.go:291] Setting OutFile to fd 1 ...
I0719 11:28:04.930039    2836 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:28:04.930046    2836 out.go:304] Setting ErrFile to fd 2...
I0719 11:28:04.930049    2836 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:28:04.930235    2836 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
I0719 11:28:04.930923    2836 config.go:182] Loaded profile config "functional-462000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:28:04.931019    2836 config.go:182] Loaded profile config "functional-462000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:28:04.931370    2836 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 11:28:04.931414    2836 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 11:28:04.939667    2836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50923
I0719 11:28:04.940116    2836 main.go:141] libmachine: () Calling .GetVersion
I0719 11:28:04.940528    2836 main.go:141] libmachine: Using API Version  1
I0719 11:28:04.940557    2836 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 11:28:04.940807    2836 main.go:141] libmachine: () Calling .GetMachineName
I0719 11:28:04.940930    2836 main.go:141] libmachine: (functional-462000) Calling .GetState
I0719 11:28:04.941023    2836 main.go:141] libmachine: (functional-462000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0719 11:28:04.941096    2836 main.go:141] libmachine: (functional-462000) DBG | hyperkit pid from json: 2125
I0719 11:28:04.942371    2836 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 11:28:04.942394    2836 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 11:28:04.950833    2836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50925
I0719 11:28:04.951179    2836 main.go:141] libmachine: () Calling .GetVersion
I0719 11:28:04.951536    2836 main.go:141] libmachine: Using API Version  1
I0719 11:28:04.951548    2836 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 11:28:04.951796    2836 main.go:141] libmachine: () Calling .GetMachineName
I0719 11:28:04.951925    2836 main.go:141] libmachine: (functional-462000) Calling .DriverName
I0719 11:28:04.952110    2836 ssh_runner.go:195] Run: systemctl --version
I0719 11:28:04.952128    2836 main.go:141] libmachine: (functional-462000) Calling .GetSSHHostname
I0719 11:28:04.952205    2836 main.go:141] libmachine: (functional-462000) Calling .GetSSHPort
I0719 11:28:04.952279    2836 main.go:141] libmachine: (functional-462000) Calling .GetSSHKeyPath
I0719 11:28:04.952366    2836 main.go:141] libmachine: (functional-462000) Calling .GetSSHUsername
I0719 11:28:04.952458    2836 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/functional-462000/id_rsa Username:docker}
I0719 11:28:04.992484    2836 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0719 11:28:05.030873    2836 main.go:141] libmachine: Making call to close driver server
I0719 11:28:05.030883    2836 main.go:141] libmachine: (functional-462000) Calling .Close
I0719 11:28:05.031030    2836 main.go:141] libmachine: (functional-462000) DBG | Closing plugin on server side
I0719 11:28:05.031114    2836 main.go:141] libmachine: Successfully made call to close driver server
I0719 11:28:05.031143    2836 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 11:28:05.031159    2836 main.go:141] libmachine: Making call to close driver server
I0719 11:28:05.031169    2836 main.go:141] libmachine: (functional-462000) Calling .Close
I0719 11:28:05.031318    2836 main.go:141] libmachine: (functional-462000) DBG | Closing plugin on server side
I0719 11:28:05.031365    2836 main.go:141] libmachine: Successfully made call to close driver server
I0719 11:28:05.031377    2836 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-462000 ssh pgrep buildkitd: exit status 1 (133.245629ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image build -t localhost/my-image:functional-462000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-462000 image build -t localhost/my-image:functional-462000 testdata/build --alsologtostderr: (1.804072597s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-462000 image build -t localhost/my-image:functional-462000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 957e29b6e5a8
---> Removed intermediate container 957e29b6e5a8
---> d090d3a723a3
Step 3/3 : ADD content.txt /
---> 52c35c7dc09a
Successfully built 52c35c7dc09a
Successfully tagged localhost/my-image:functional-462000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-462000 image build -t localhost/my-image:functional-462000 testdata/build --alsologtostderr:
I0719 11:28:05.657180    2853 out.go:291] Setting OutFile to fd 1 ...
I0719 11:28:05.657531    2853 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:28:05.657537    2853 out.go:304] Setting ErrFile to fd 2...
I0719 11:28:05.657540    2853 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:28:05.657713    2853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
I0719 11:28:05.658301    2853 config.go:182] Loaded profile config "functional-462000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:28:05.659479    2853 config.go:182] Loaded profile config "functional-462000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:28:05.659826    2853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 11:28:05.659865    2853 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 11:28:05.668221    2853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50945
I0719 11:28:05.668632    2853 main.go:141] libmachine: () Calling .GetVersion
I0719 11:28:05.669041    2853 main.go:141] libmachine: Using API Version  1
I0719 11:28:05.669051    2853 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 11:28:05.669248    2853 main.go:141] libmachine: () Calling .GetMachineName
I0719 11:28:05.669373    2853 main.go:141] libmachine: (functional-462000) Calling .GetState
I0719 11:28:05.669464    2853 main.go:141] libmachine: (functional-462000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0719 11:28:05.669544    2853 main.go:141] libmachine: (functional-462000) DBG | hyperkit pid from json: 2125
I0719 11:28:05.670782    2853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0719 11:28:05.670812    2853 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0719 11:28:05.679274    2853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50947
I0719 11:28:05.679658    2853 main.go:141] libmachine: () Calling .GetVersion
I0719 11:28:05.680040    2853 main.go:141] libmachine: Using API Version  1
I0719 11:28:05.680058    2853 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 11:28:05.680284    2853 main.go:141] libmachine: () Calling .GetMachineName
I0719 11:28:05.680400    2853 main.go:141] libmachine: (functional-462000) Calling .DriverName
I0719 11:28:05.680560    2853 ssh_runner.go:195] Run: systemctl --version
I0719 11:28:05.680580    2853 main.go:141] libmachine: (functional-462000) Calling .GetSSHHostname
I0719 11:28:05.680664    2853 main.go:141] libmachine: (functional-462000) Calling .GetSSHPort
I0719 11:28:05.680744    2853 main.go:141] libmachine: (functional-462000) Calling .GetSSHKeyPath
I0719 11:28:05.680818    2853 main.go:141] libmachine: (functional-462000) Calling .GetSSHUsername
I0719 11:28:05.680909    2853 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/functional-462000/id_rsa Username:docker}
I0719 11:28:05.722968    2853 build_images.go:161] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.174039201.tar
I0719 11:28:05.723063    2853 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0719 11:28:05.737342    2853 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.174039201.tar
I0719 11:28:05.742836    2853 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.174039201.tar: stat -c "%s %y" /var/lib/minikube/build/build.174039201.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.174039201.tar': No such file or directory
I0719 11:28:05.742873    2853 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.174039201.tar --> /var/lib/minikube/build/build.174039201.tar (3072 bytes)
I0719 11:28:05.775054    2853 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.174039201
I0719 11:28:05.783127    2853 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.174039201 -xf /var/lib/minikube/build/build.174039201.tar
I0719 11:28:05.791318    2853 docker.go:360] Building image: /var/lib/minikube/build/build.174039201
I0719 11:28:05.791396    2853 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-462000 /var/lib/minikube/build/build.174039201
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0719 11:28:07.360225    2853 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-462000 /var/lib/minikube/build/build.174039201: (1.568845533s)
I0719 11:28:07.360297    2853 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.174039201
I0719 11:28:07.369544    2853 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.174039201.tar
I0719 11:28:07.381444    2853 build_images.go:217] Built localhost/my-image:functional-462000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.174039201.tar
I0719 11:28:07.381469    2853 build_images.go:133] succeeded building to: functional-462000
I0719 11:28:07.381487    2853 build_images.go:134] failed building to: 
I0719 11:28:07.381506    2853 main.go:141] libmachine: Making call to close driver server
I0719 11:28:07.381514    2853 main.go:141] libmachine: (functional-462000) Calling .Close
I0719 11:28:07.381664    2853 main.go:141] libmachine: (functional-462000) DBG | Closing plugin on server side
I0719 11:28:07.381672    2853 main.go:141] libmachine: Successfully made call to close driver server
I0719 11:28:07.381680    2853 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 11:28:07.381686    2853 main.go:141] libmachine: Making call to close driver server
I0719 11:28:07.381692    2853 main.go:141] libmachine: (functional-462000) Calling .Close
I0719 11:28:07.381870    2853 main.go:141] libmachine: (functional-462000) DBG | Closing plugin on server side
I0719 11:28:07.381872    2853 main.go:141] libmachine: Successfully made call to close driver server
I0719 11:28:07.381882    2853 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.10s)
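The Step 1/3 through 3/3 lines in the build log above imply that the testdata/build context contains a Dockerfile roughly like the following; this is a reconstruction from the logged steps, not the repository file itself:

```dockerfile
# Reconstructed from the "Step N/3" lines in the build log above;
# the actual testdata/build Dockerfile may differ.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```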

TestFunctional/parallel/ImageCommands/Setup (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.714556051s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-462000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-462000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-462000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-5xgnl" [f7f0bccd-d7be-47e1-97c7-4d867e6148b6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-5xgnl" [f7f0bccd-d7be-47e1-97c7-4d867e6148b6] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003897602s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.17s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image load --daemon docker.io/kicbase/echo-server:functional-462000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.96s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image load --daemon docker.io/kicbase/echo-server:functional-462000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.62s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-462000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image load --daemon docker.io/kicbase/echo-server:functional-462000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.37s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image save docker.io/kicbase/echo-server:functional-462000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image rm docker.io/kicbase/echo-server:functional-462000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.31s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.48s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-462000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 image save --daemon docker.io/kicbase/echo-server:functional-462000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-462000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.31s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.25s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.25s)

TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "178.142301ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "78.347419ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "175.956055ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "76.239235ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

TestFunctional/parallel/MountCmd/any-port (6.01s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-462000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port609048723/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721413664849008000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port609048723/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721413664849008000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port609048723/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721413664849008000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port609048723/001/test-1721413664849008000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-462000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (152.194526ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 19 18:27 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 19 18:27 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 19 18:27 test-1721413664849008000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh cat /mount-9p/test-1721413664849008000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-462000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [29ee9981-e22d-42bf-b3b2-358ff777f523] Pending
helpers_test.go:344: "busybox-mount" [29ee9981-e22d-42bf-b3b2-358ff777f523] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [29ee9981-e22d-42bf-b3b2-358ff777f523] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [29ee9981-e22d-42bf-b3b2-358ff777f523] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.002695891s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-462000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-462000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port609048723/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.01s)

TestFunctional/parallel/ServiceCmd/List (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.38s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 service list -o json
functional_test.go:1490: Took "371.217029ms" to run "out/minikube-darwin-amd64 -p functional-462000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.169.0.4:32281
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

TestFunctional/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

TestFunctional/parallel/ServiceCmd/URL (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.169.0.4:32281
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.25s)

TestFunctional/parallel/MountCmd/specific-port (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-462000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2234529542/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-462000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (199.137014ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-462000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2234529542/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-462000 ssh "sudo umount -f /mount-9p": exit status 1 (129.157986ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-462000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-462000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2234529542/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-462000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1035086119/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-462000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1035086119/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-462000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1035086119/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-462000 ssh "findmnt -T" /mount1: exit status 1 (158.246873ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-462000 ssh "findmnt -T" /mount1: exit status 1 (174.36303ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-462000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-462000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1035086119/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-462000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1035086119/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-462000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1035086119/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)

TestFunctional/parallel/DockerEnv/bash (0.59s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-462000 docker-env) && out/minikube-darwin-amd64 status -p functional-462000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-462000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.59s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-462000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-462000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-462000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-462000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (435.47s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-820000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0719 11:32:04.390077    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:32:12.110534    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:32:12.116215    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:32:12.126524    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:32:12.147879    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:32:12.187966    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:32:12.269342    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:32:12.430564    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:32:12.750676    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:32:13.392340    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:32:14.672616    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:32:17.233262    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:32:22.354428    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:32:32.595929    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:32:53.076495    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:33:34.036480    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:34:55.955672    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-820000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (7m15.09115002s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (435.47s)

TestMultiControlPlane/serial/DeployApp (4.92s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-820000 -- rollout status deployment/busybox: (2.687631551s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- exec busybox-fc5497c4f-m4sxj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- exec busybox-fc5497c4f-r4qzb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- exec busybox-fc5497c4f-x8z9l -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- exec busybox-fc5497c4f-m4sxj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- exec busybox-fc5497c4f-r4qzb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- exec busybox-fc5497c4f-x8z9l -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- exec busybox-fc5497c4f-m4sxj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- exec busybox-fc5497c4f-r4qzb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- exec busybox-fc5497c4f-x8z9l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.92s)

TestMultiControlPlane/serial/PingHostFromPods (1.27s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- exec busybox-fc5497c4f-m4sxj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- exec busybox-fc5497c4f-m4sxj -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- exec busybox-fc5497c4f-r4qzb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- exec busybox-fc5497c4f-r4qzb -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- exec busybox-fc5497c4f-x8z9l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-820000 -- exec busybox-fc5497c4f-x8z9l -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)

TestMultiControlPlane/serial/AddWorkerNode (56.81s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-820000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-820000 -v=7 --alsologtostderr: (56.372392346s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.81s)

TestMultiControlPlane/serial/NodeLabels (0.05s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-820000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.34s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.34s)

TestMultiControlPlane/serial/CopyFile (9.09s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp testdata/cp-test.txt ha-820000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile449767761/001/cp-test_ha-820000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000:/home/docker/cp-test.txt ha-820000-m02:/home/docker/cp-test_ha-820000_ha-820000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m02 "sudo cat /home/docker/cp-test_ha-820000_ha-820000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000:/home/docker/cp-test.txt ha-820000-m03:/home/docker/cp-test_ha-820000_ha-820000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m03 "sudo cat /home/docker/cp-test_ha-820000_ha-820000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000:/home/docker/cp-test.txt ha-820000-m04:/home/docker/cp-test_ha-820000_ha-820000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m04 "sudo cat /home/docker/cp-test_ha-820000_ha-820000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp testdata/cp-test.txt ha-820000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile449767761/001/cp-test_ha-820000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000-m02:/home/docker/cp-test.txt ha-820000:/home/docker/cp-test_ha-820000-m02_ha-820000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000 "sudo cat /home/docker/cp-test_ha-820000-m02_ha-820000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000-m02:/home/docker/cp-test.txt ha-820000-m03:/home/docker/cp-test_ha-820000-m02_ha-820000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m03 "sudo cat /home/docker/cp-test_ha-820000-m02_ha-820000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000-m02:/home/docker/cp-test.txt ha-820000-m04:/home/docker/cp-test_ha-820000-m02_ha-820000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m04 "sudo cat /home/docker/cp-test_ha-820000-m02_ha-820000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp testdata/cp-test.txt ha-820000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile449767761/001/cp-test_ha-820000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000-m03:/home/docker/cp-test.txt ha-820000:/home/docker/cp-test_ha-820000-m03_ha-820000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000 "sudo cat /home/docker/cp-test_ha-820000-m03_ha-820000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000-m03:/home/docker/cp-test.txt ha-820000-m02:/home/docker/cp-test_ha-820000-m03_ha-820000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m02 "sudo cat /home/docker/cp-test_ha-820000-m03_ha-820000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000-m03:/home/docker/cp-test.txt ha-820000-m04:/home/docker/cp-test_ha-820000-m03_ha-820000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m04 "sudo cat /home/docker/cp-test_ha-820000-m03_ha-820000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp testdata/cp-test.txt ha-820000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000-m04:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile449767761/001/cp-test_ha-820000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000-m04:/home/docker/cp-test.txt ha-820000:/home/docker/cp-test_ha-820000-m04_ha-820000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000 "sudo cat /home/docker/cp-test_ha-820000-m04_ha-820000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000-m04:/home/docker/cp-test.txt ha-820000-m02:/home/docker/cp-test_ha-820000-m04_ha-820000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m02 "sudo cat /home/docker/cp-test_ha-820000-m04_ha-820000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 cp ha-820000-m04:/home/docker/cp-test.txt ha-820000-m03:/home/docker/cp-test_ha-820000-m04_ha-820000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 ssh -n ha-820000-m03 "sudo cat /home/docker/cp-test_ha-820000-m04_ha-820000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.09s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (8.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 node stop m02 -v=7 --alsologtostderr
E0719 11:37:04.384543    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-820000 node stop m02 -v=7 --alsologtostderr: (8.410408049s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-820000 status -v=7 --alsologtostderr: exit status 7 (357.506039ms)

                                                
                                                
-- stdout --
	ha-820000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-820000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-820000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-820000-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:37:05.282859    3626 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:37:05.283048    3626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:37:05.283053    3626 out.go:304] Setting ErrFile to fd 2...
	I0719 11:37:05.283057    3626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:37:05.283230    3626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
	I0719 11:37:05.283399    3626 out.go:298] Setting JSON to false
	I0719 11:37:05.283423    3626 mustload.go:65] Loading cluster: ha-820000
	I0719 11:37:05.283466    3626 notify.go:220] Checking for updates...
	I0719 11:37:05.283741    3626 config.go:182] Loaded profile config "ha-820000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:37:05.283758    3626 status.go:255] checking status of ha-820000 ...
	I0719 11:37:05.284141    3626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:37:05.284203    3626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:37:05.295816    3626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51702
	I0719 11:37:05.296182    3626 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:37:05.296567    3626 main.go:141] libmachine: Using API Version  1
	I0719 11:37:05.296598    3626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:37:05.296836    3626 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:37:05.296940    3626 main.go:141] libmachine: (ha-820000) Calling .GetState
	I0719 11:37:05.297013    3626 main.go:141] libmachine: (ha-820000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 11:37:05.297096    3626 main.go:141] libmachine: (ha-820000) DBG | hyperkit pid from json: 2900
	I0719 11:37:05.298086    3626 status.go:330] ha-820000 host status = "Running" (err=<nil>)
	I0719 11:37:05.298105    3626 host.go:66] Checking if "ha-820000" exists ...
	I0719 11:37:05.298353    3626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:37:05.298377    3626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:37:05.306820    3626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51704
	I0719 11:37:05.307176    3626 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:37:05.307508    3626 main.go:141] libmachine: Using API Version  1
	I0719 11:37:05.307523    3626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:37:05.307753    3626 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:37:05.307890    3626 main.go:141] libmachine: (ha-820000) Calling .GetIP
	I0719 11:37:05.314134    3626 host.go:66] Checking if "ha-820000" exists ...
	I0719 11:37:05.314380    3626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:37:05.314412    3626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:37:05.322788    3626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51706
	I0719 11:37:05.323140    3626 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:37:05.323462    3626 main.go:141] libmachine: Using API Version  1
	I0719 11:37:05.323481    3626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:37:05.323689    3626 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:37:05.323803    3626 main.go:141] libmachine: (ha-820000) Calling .DriverName
	I0719 11:37:05.323966    3626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 11:37:05.323986    3626 main.go:141] libmachine: (ha-820000) Calling .GetSSHHostname
	I0719 11:37:05.324073    3626 main.go:141] libmachine: (ha-820000) Calling .GetSSHPort
	I0719 11:37:05.324164    3626 main.go:141] libmachine: (ha-820000) Calling .GetSSHKeyPath
	I0719 11:37:05.324250    3626 main.go:141] libmachine: (ha-820000) Calling .GetSSHUsername
	I0719 11:37:05.324331    3626 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/ha-820000/id_rsa Username:docker}
	I0719 11:37:05.356464    3626 ssh_runner.go:195] Run: systemctl --version
	I0719 11:37:05.360828    3626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 11:37:05.371308    3626 kubeconfig.go:125] found "ha-820000" server: "https://192.169.0.254:8443"
	I0719 11:37:05.371333    3626 api_server.go:166] Checking apiserver status ...
	I0719 11:37:05.371371    3626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 11:37:05.384470    3626 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2016/cgroup
	W0719 11:37:05.392424    3626 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2016/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 11:37:05.392469    3626 ssh_runner.go:195] Run: ls
	I0719 11:37:05.395769    3626 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0719 11:37:05.399940    3626 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0719 11:37:05.399951    3626 status.go:422] ha-820000 apiserver status = Running (err=<nil>)
	I0719 11:37:05.399961    3626 status.go:257] ha-820000 status: &{Name:ha-820000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 11:37:05.399972    3626 status.go:255] checking status of ha-820000-m02 ...
	I0719 11:37:05.400231    3626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:37:05.400260    3626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:37:05.408930    3626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51710
	I0719 11:37:05.409298    3626 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:37:05.409638    3626 main.go:141] libmachine: Using API Version  1
	I0719 11:37:05.409658    3626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:37:05.409888    3626 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:37:05.410010    3626 main.go:141] libmachine: (ha-820000-m02) Calling .GetState
	I0719 11:37:05.410099    3626 main.go:141] libmachine: (ha-820000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 11:37:05.410176    3626 main.go:141] libmachine: (ha-820000-m02) DBG | hyperkit pid from json: 3189
	I0719 11:37:05.411123    3626 main.go:141] libmachine: (ha-820000-m02) DBG | hyperkit pid 3189 missing from process table
	I0719 11:37:05.411142    3626 status.go:330] ha-820000-m02 host status = "Stopped" (err=<nil>)
	I0719 11:37:05.411150    3626 status.go:343] host is not running, skipping remaining checks
	I0719 11:37:05.411157    3626 status.go:257] ha-820000-m02 status: &{Name:ha-820000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 11:37:05.411174    3626 status.go:255] checking status of ha-820000-m03 ...
	I0719 11:37:05.411440    3626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:37:05.411466    3626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:37:05.420059    3626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51712
	I0719 11:37:05.420436    3626 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:37:05.420740    3626 main.go:141] libmachine: Using API Version  1
	I0719 11:37:05.420748    3626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:37:05.420935    3626 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:37:05.421040    3626 main.go:141] libmachine: (ha-820000-m03) Calling .GetState
	I0719 11:37:05.421120    3626 main.go:141] libmachine: (ha-820000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 11:37:05.421206    3626 main.go:141] libmachine: (ha-820000-m03) DBG | hyperkit pid from json: 3200
	I0719 11:37:05.422190    3626 status.go:330] ha-820000-m03 host status = "Running" (err=<nil>)
	I0719 11:37:05.422200    3626 host.go:66] Checking if "ha-820000-m03" exists ...
	I0719 11:37:05.422457    3626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:37:05.422478    3626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:37:05.431052    3626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51714
	I0719 11:37:05.431397    3626 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:37:05.431733    3626 main.go:141] libmachine: Using API Version  1
	I0719 11:37:05.431748    3626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:37:05.431939    3626 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:37:05.432047    3626 main.go:141] libmachine: (ha-820000-m03) Calling .GetIP
	I0719 11:37:05.432133    3626 host.go:66] Checking if "ha-820000-m03" exists ...
	I0719 11:37:05.432392    3626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:37:05.432422    3626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:37:05.440870    3626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51716
	I0719 11:37:05.441231    3626 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:37:05.441562    3626 main.go:141] libmachine: Using API Version  1
	I0719 11:37:05.441581    3626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:37:05.441809    3626 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:37:05.441918    3626 main.go:141] libmachine: (ha-820000-m03) Calling .DriverName
	I0719 11:37:05.442050    3626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 11:37:05.442061    3626 main.go:141] libmachine: (ha-820000-m03) Calling .GetSSHHostname
	I0719 11:37:05.442145    3626 main.go:141] libmachine: (ha-820000-m03) Calling .GetSSHPort
	I0719 11:37:05.442233    3626 main.go:141] libmachine: (ha-820000-m03) Calling .GetSSHKeyPath
	I0719 11:37:05.442326    3626 main.go:141] libmachine: (ha-820000-m03) Calling .GetSSHUsername
	I0719 11:37:05.442400    3626 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/ha-820000-m03/id_rsa Username:docker}
	I0719 11:37:05.474677    3626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 11:37:05.486500    3626 kubeconfig.go:125] found "ha-820000" server: "https://192.169.0.254:8443"
	I0719 11:37:05.486514    3626 api_server.go:166] Checking apiserver status ...
	I0719 11:37:05.486549    3626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 11:37:05.498588    3626 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2000/cgroup
	W0719 11:37:05.507023    3626 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2000/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 11:37:05.507082    3626 ssh_runner.go:195] Run: ls
	I0719 11:37:05.510261    3626 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0719 11:37:05.513323    3626 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0719 11:37:05.513334    3626 status.go:422] ha-820000-m03 apiserver status = Running (err=<nil>)
	I0719 11:37:05.513342    3626 status.go:257] ha-820000-m03 status: &{Name:ha-820000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 11:37:05.513352    3626 status.go:255] checking status of ha-820000-m04 ...
	I0719 11:37:05.513625    3626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:37:05.513645    3626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:37:05.522128    3626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51720
	I0719 11:37:05.522504    3626 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:37:05.522889    3626 main.go:141] libmachine: Using API Version  1
	I0719 11:37:05.522910    3626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:37:05.523134    3626 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:37:05.523266    3626 main.go:141] libmachine: (ha-820000-m04) Calling .GetState
	I0719 11:37:05.523353    3626 main.go:141] libmachine: (ha-820000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 11:37:05.523458    3626 main.go:141] libmachine: (ha-820000-m04) DBG | hyperkit pid from json: 3305
	I0719 11:37:05.524434    3626 status.go:330] ha-820000-m04 host status = "Running" (err=<nil>)
	I0719 11:37:05.524442    3626 host.go:66] Checking if "ha-820000-m04" exists ...
	I0719 11:37:05.524690    3626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:37:05.524723    3626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:37:05.533140    3626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51722
	I0719 11:37:05.533488    3626 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:37:05.533793    3626 main.go:141] libmachine: Using API Version  1
	I0719 11:37:05.533803    3626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:37:05.534035    3626 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:37:05.534142    3626 main.go:141] libmachine: (ha-820000-m04) Calling .GetIP
	I0719 11:37:05.534231    3626 host.go:66] Checking if "ha-820000-m04" exists ...
	I0719 11:37:05.534471    3626 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:37:05.534502    3626 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:37:05.542816    3626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51724
	I0719 11:37:05.543181    3626 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:37:05.543519    3626 main.go:141] libmachine: Using API Version  1
	I0719 11:37:05.543532    3626 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:37:05.543721    3626 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:37:05.543830    3626 main.go:141] libmachine: (ha-820000-m04) Calling .DriverName
	I0719 11:37:05.543953    3626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 11:37:05.543972    3626 main.go:141] libmachine: (ha-820000-m04) Calling .GetSSHHostname
	I0719 11:37:05.544048    3626 main.go:141] libmachine: (ha-820000-m04) Calling .GetSSHPort
	I0719 11:37:05.544130    3626 main.go:141] libmachine: (ha-820000-m04) Calling .GetSSHKeyPath
	I0719 11:37:05.544208    3626 main.go:141] libmachine: (ha-820000-m04) Calling .GetSSHUsername
	I0719 11:37:05.544286    3626 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/ha-820000-m04/id_rsa Username:docker}
	I0719 11:37:05.575322    3626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 11:37:05.585824    3626 status.go:257] ha-820000-m04 status: &{Name:ha-820000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.77s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.26s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (156.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 node start m02 -v=7 --alsologtostderr
E0719 11:37:12.104578    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:37:39.793212    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 11:38:27.436290    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-820000 node start m02 -v=7 --alsologtostderr: (2m36.329541707s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (156.82s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.33s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (209.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-820000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-820000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-820000 -v=7 --alsologtostderr: (27.124026401s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-820000 --wait=true -v=7 --alsologtostderr
E0719 11:42:04.378857    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:42:12.098283    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-820000 --wait=true -v=7 --alsologtostderr: (3m2.123886302s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-820000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (209.36s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (8.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-820000 node delete m03 -v=7 --alsologtostderr: (7.627415451s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.06s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.25s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (24.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-820000 stop -v=7 --alsologtostderr: (24.834710383s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-820000 status -v=7 --alsologtostderr: exit status 7 (88.741526ms)

                                                
                                                
-- stdout --
	ha-820000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-820000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-820000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:43:45.562370    3809 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:43:45.562645    3809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:43:45.562650    3809 out.go:304] Setting ErrFile to fd 2...
	I0719 11:43:45.562654    3809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:43:45.562819    3809 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
	I0719 11:43:45.563011    3809 out.go:298] Setting JSON to false
	I0719 11:43:45.563031    3809 mustload.go:65] Loading cluster: ha-820000
	I0719 11:43:45.563071    3809 notify.go:220] Checking for updates...
	I0719 11:43:45.563323    3809 config.go:182] Loaded profile config "ha-820000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:43:45.563342    3809 status.go:255] checking status of ha-820000 ...
	I0719 11:43:45.563691    3809 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:43:45.563728    3809 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:43:45.572482    3809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52031
	I0719 11:43:45.572843    3809 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:43:45.573310    3809 main.go:141] libmachine: Using API Version  1
	I0719 11:43:45.573331    3809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:43:45.573557    3809 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:43:45.573671    3809 main.go:141] libmachine: (ha-820000) Calling .GetState
	I0719 11:43:45.573761    3809 main.go:141] libmachine: (ha-820000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 11:43:45.573829    3809 main.go:141] libmachine: (ha-820000) DBG | hyperkit pid from json: 3718
	I0719 11:43:45.574747    3809 main.go:141] libmachine: (ha-820000) DBG | hyperkit pid 3718 missing from process table
	I0719 11:43:45.574773    3809 status.go:330] ha-820000 host status = "Stopped" (err=<nil>)
	I0719 11:43:45.574780    3809 status.go:343] host is not running, skipping remaining checks
	I0719 11:43:45.574787    3809 status.go:257] ha-820000 status: &{Name:ha-820000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 11:43:45.574809    3809 status.go:255] checking status of ha-820000-m02 ...
	I0719 11:43:45.575052    3809 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:43:45.575074    3809 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:43:45.583314    3809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52033
	I0719 11:43:45.583674    3809 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:43:45.583990    3809 main.go:141] libmachine: Using API Version  1
	I0719 11:43:45.584006    3809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:43:45.584204    3809 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:43:45.584327    3809 main.go:141] libmachine: (ha-820000-m02) Calling .GetState
	I0719 11:43:45.584419    3809 main.go:141] libmachine: (ha-820000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 11:43:45.584497    3809 main.go:141] libmachine: (ha-820000-m02) DBG | hyperkit pid from json: 3736
	I0719 11:43:45.585399    3809 main.go:141] libmachine: (ha-820000-m02) DBG | hyperkit pid 3736 missing from process table
	I0719 11:43:45.585411    3809 status.go:330] ha-820000-m02 host status = "Stopped" (err=<nil>)
	I0719 11:43:45.585419    3809 status.go:343] host is not running, skipping remaining checks
	I0719 11:43:45.585426    3809 status.go:257] ha-820000-m02 status: &{Name:ha-820000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 11:43:45.585435    3809 status.go:255] checking status of ha-820000-m04 ...
	I0719 11:43:45.585668    3809 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:43:45.585691    3809 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:43:45.593966    3809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52035
	I0719 11:43:45.594305    3809 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:43:45.594600    3809 main.go:141] libmachine: Using API Version  1
	I0719 11:43:45.594615    3809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:43:45.594839    3809 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:43:45.594951    3809 main.go:141] libmachine: (ha-820000-m04) Calling .GetState
	I0719 11:43:45.595031    3809 main.go:141] libmachine: (ha-820000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 11:43:45.595100    3809 main.go:141] libmachine: (ha-820000-m04) DBG | hyperkit pid from json: 3750
	I0719 11:43:45.596028    3809 status.go:330] ha-820000-m04 host status = "Stopped" (err=<nil>)
	I0719 11:43:45.596033    3809 main.go:141] libmachine: (ha-820000-m04) DBG | hyperkit pid 3750 missing from process table
	I0719 11:43:45.596035    3809 status.go:343] host is not running, skipping remaining checks
	I0719 11:43:45.596044    3809 status.go:257] ha-820000-m04 status: &{Name:ha-820000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.92s)

TestMultiControlPlane/serial/RestartCluster (243.36s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-820000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E0719 11:47:04.446383    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:47:12.165417    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-820000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : (4m2.926376095s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (243.36s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.25s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.25s)

TestMultiControlPlane/serial/AddSecondaryNode (77.41s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-820000 --control-plane -v=7 --alsologtostderr
E0719 11:48:35.215216    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-820000 --control-plane -v=7 --alsologtostderr: (1m16.966295676s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-820000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.41s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.33s)

TestImageBuild/serial/Setup (38.62s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-159000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-159000 --driver=hyperkit : (38.620145639s)
--- PASS: TestImageBuild/serial/Setup (38.62s)

TestImageBuild/serial/NormalBuild (1.22s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-159000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-159000: (1.21954851s)
--- PASS: TestImageBuild/serial/NormalBuild (1.22s)

TestImageBuild/serial/BuildWithBuildArg (0.5s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-159000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.50s)

TestImageBuild/serial/BuildWithDockerIgnore (0.25s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-159000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.25s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-159000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

TestJSONOutput/start/Command (52.33s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-147000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-147000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (52.329525599s)
--- PASS: TestJSONOutput/start/Command (52.33s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-147000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.45s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-147000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.32s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-147000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-147000 --output=json --user=testUser: (8.31896368s)
--- PASS: TestJSONOutput/stop/Command (8.32s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.58s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-358000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-358000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (361.366242ms)

-- stdout --
	{"specversion":"1.0","id":"25c61fbf-1937-48a5-927c-09e5100ab6c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-358000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f8a157a-92ad-439d-9a69-5a9a4ed53a3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19307"}}
	{"specversion":"1.0","id":"9ef675bd-d594-4c6f-842f-c005cc6e37f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig"}}
	{"specversion":"1.0","id":"6eeb470a-c8da-4374-91cd-eff15e5ada46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"d8c51f8e-24e0-457b-8377-caed633e7454","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"13c900a0-bf43-492f-b2b5-6e443ad43a6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1053/.minikube"}}
	{"specversion":"1.0","id":"b105a555-2730-4207-88e1-e08db04d345c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"24ffc01c-4f28-4705-9a8d-20413572efae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-358000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-358000
--- PASS: TestErrorJSONOutput (0.58s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (91.42s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-893000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-893000 --driver=hyperkit : (39.381792257s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-895000 --driver=hyperkit 
E0719 11:52:04.444252    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:52:12.164397    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-895000 --driver=hyperkit : (40.758752208s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-893000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-895000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-895000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-895000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-895000: (5.24037321s)
helpers_test.go:175: Cleaning up "first-893000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-893000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-893000: (5.278408971s)
--- PASS: TestMinikubeProfile (91.42s)

TestMountStart/serial/StartWithMountFirst (21.21s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-099000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-099000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (20.21289419s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.21s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-099000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-099000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMultiNode/serial/FreshStart2Nodes (125.85s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-871000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0719 11:55:07.499569    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-871000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (2m5.61833443s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.85s)

TestMultiNode/serial/DeployApp2Nodes (4.4s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-871000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-871000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-871000 -- rollout status deployment/busybox: (2.522713854s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-871000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-871000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-871000 -- exec busybox-fc5497c4f-4vlzm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-871000 -- exec busybox-fc5497c4f-t7lpn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-871000 -- exec busybox-fc5497c4f-4vlzm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-871000 -- exec busybox-fc5497c4f-t7lpn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-871000 -- exec busybox-fc5497c4f-4vlzm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-871000 -- exec busybox-fc5497c4f-t7lpn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.40s)

TestMultiNode/serial/PingHostFrom2Pods (0.88s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-871000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-871000 -- exec busybox-fc5497c4f-4vlzm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-871000 -- exec busybox-fc5497c4f-4vlzm -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-871000 -- exec busybox-fc5497c4f-t7lpn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-871000 -- exec busybox-fc5497c4f-t7lpn -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)

TestMultiNode/serial/AddNode (44.81s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-871000 -v 3 --alsologtostderr
E0719 11:57:04.444194    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 11:57:12.163275    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-871000 -v 3 --alsologtostderr: (44.495469346s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.81s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-871000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.19s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.19s)

TestMultiNode/serial/CopyFile (5.27s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 cp testdata/cp-test.txt multinode-871000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 cp multinode-871000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1594904202/001/cp-test_multinode-871000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 cp multinode-871000:/home/docker/cp-test.txt multinode-871000-m02:/home/docker/cp-test_multinode-871000_multinode-871000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000-m02 "sudo cat /home/docker/cp-test_multinode-871000_multinode-871000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 cp multinode-871000:/home/docker/cp-test.txt multinode-871000-m03:/home/docker/cp-test_multinode-871000_multinode-871000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000-m03 "sudo cat /home/docker/cp-test_multinode-871000_multinode-871000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 cp testdata/cp-test.txt multinode-871000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 cp multinode-871000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1594904202/001/cp-test_multinode-871000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 cp multinode-871000-m02:/home/docker/cp-test.txt multinode-871000:/home/docker/cp-test_multinode-871000-m02_multinode-871000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000 "sudo cat /home/docker/cp-test_multinode-871000-m02_multinode-871000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 cp multinode-871000-m02:/home/docker/cp-test.txt multinode-871000-m03:/home/docker/cp-test_multinode-871000-m02_multinode-871000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000-m03 "sudo cat /home/docker/cp-test_multinode-871000-m02_multinode-871000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 cp testdata/cp-test.txt multinode-871000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 cp multinode-871000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1594904202/001/cp-test_multinode-871000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 cp multinode-871000-m03:/home/docker/cp-test.txt multinode-871000:/home/docker/cp-test_multinode-871000-m03_multinode-871000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000 "sudo cat /home/docker/cp-test_multinode-871000-m03_multinode-871000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 cp multinode-871000-m03:/home/docker/cp-test.txt multinode-871000-m02:/home/docker/cp-test_multinode-871000-m03_multinode-871000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 ssh -n multinode-871000-m02 "sudo cat /home/docker/cp-test_multinode-871000-m03_multinode-871000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.27s)

TestMultiNode/serial/StopNode (2.82s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-871000 node stop m03: (2.33258333s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-871000 status: exit status 7 (243.421312ms)

-- stdout --
	multinode-871000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-871000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-871000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-871000 status --alsologtostderr: exit status 7 (242.097823ms)

-- stdout --
	multinode-871000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-871000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-871000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0719 11:57:37.175795    4498 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:57:37.176391    4498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:57:37.176401    4498 out.go:304] Setting ErrFile to fd 2...
	I0719 11:57:37.176408    4498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:57:37.176961    4498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
	I0719 11:57:37.177162    4498 out.go:298] Setting JSON to false
	I0719 11:57:37.177189    4498 mustload.go:65] Loading cluster: multinode-871000
	I0719 11:57:37.177232    4498 notify.go:220] Checking for updates...
	I0719 11:57:37.177494    4498 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:57:37.177509    4498 status.go:255] checking status of multinode-871000 ...
	I0719 11:57:37.177886    4498 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:57:37.177925    4498 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:57:37.187035    4498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53063
	I0719 11:57:37.187476    4498 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:57:37.187867    4498 main.go:141] libmachine: Using API Version  1
	I0719 11:57:37.187876    4498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:57:37.188074    4498 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:57:37.188180    4498 main.go:141] libmachine: (multinode-871000) Calling .GetState
	I0719 11:57:37.188268    4498 main.go:141] libmachine: (multinode-871000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 11:57:37.188345    4498 main.go:141] libmachine: (multinode-871000) DBG | hyperkit pid from json: 4202
	I0719 11:57:37.189507    4498 status.go:330] multinode-871000 host status = "Running" (err=<nil>)
	I0719 11:57:37.189525    4498 host.go:66] Checking if "multinode-871000" exists ...
	I0719 11:57:37.189756    4498 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:57:37.189776    4498 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:57:37.198464    4498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53065
	I0719 11:57:37.198825    4498 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:57:37.199154    4498 main.go:141] libmachine: Using API Version  1
	I0719 11:57:37.199170    4498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:57:37.199412    4498 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:57:37.199525    4498 main.go:141] libmachine: (multinode-871000) Calling .GetIP
	I0719 11:57:37.199609    4498 host.go:66] Checking if "multinode-871000" exists ...
	I0719 11:57:37.199872    4498 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:57:37.199897    4498 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:57:37.208359    4498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53067
	I0719 11:57:37.208683    4498 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:57:37.208983    4498 main.go:141] libmachine: Using API Version  1
	I0719 11:57:37.208991    4498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:57:37.209184    4498 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:57:37.209296    4498 main.go:141] libmachine: (multinode-871000) Calling .DriverName
	I0719 11:57:37.209442    4498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 11:57:37.209465    4498 main.go:141] libmachine: (multinode-871000) Calling .GetSSHHostname
	I0719 11:57:37.209539    4498 main.go:141] libmachine: (multinode-871000) Calling .GetSSHPort
	I0719 11:57:37.209656    4498 main.go:141] libmachine: (multinode-871000) Calling .GetSSHKeyPath
	I0719 11:57:37.209747    4498 main.go:141] libmachine: (multinode-871000) Calling .GetSSHUsername
	I0719 11:57:37.209833    4498 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000/id_rsa Username:docker}
	I0719 11:57:37.240723    4498 ssh_runner.go:195] Run: systemctl --version
	I0719 11:57:37.244903    4498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 11:57:37.255497    4498 kubeconfig.go:125] found "multinode-871000" server: "https://192.169.0.16:8443"
	I0719 11:57:37.255521    4498 api_server.go:166] Checking apiserver status ...
	I0719 11:57:37.255559    4498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 11:57:37.266227    4498 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2046/cgroup
	W0719 11:57:37.273236    4498 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2046/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 11:57:37.273277    4498 ssh_runner.go:195] Run: ls
	I0719 11:57:37.276448    4498 api_server.go:253] Checking apiserver healthz at https://192.169.0.16:8443/healthz ...
	I0719 11:57:37.279536    4498 api_server.go:279] https://192.169.0.16:8443/healthz returned 200:
	ok
	I0719 11:57:37.279547    4498 status.go:422] multinode-871000 apiserver status = Running (err=<nil>)
	I0719 11:57:37.279562    4498 status.go:257] multinode-871000 status: &{Name:multinode-871000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 11:57:37.279573    4498 status.go:255] checking status of multinode-871000-m02 ...
	I0719 11:57:37.279828    4498 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:57:37.279849    4498 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:57:37.288766    4498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53071
	I0719 11:57:37.289102    4498 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:57:37.289409    4498 main.go:141] libmachine: Using API Version  1
	I0719 11:57:37.289418    4498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:57:37.289657    4498 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:57:37.289794    4498 main.go:141] libmachine: (multinode-871000-m02) Calling .GetState
	I0719 11:57:37.289882    4498 main.go:141] libmachine: (multinode-871000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 11:57:37.289958    4498 main.go:141] libmachine: (multinode-871000-m02) DBG | hyperkit pid from json: 4223
	I0719 11:57:37.291105    4498 status.go:330] multinode-871000-m02 host status = "Running" (err=<nil>)
	I0719 11:57:37.291115    4498 host.go:66] Checking if "multinode-871000-m02" exists ...
	I0719 11:57:37.291357    4498 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:57:37.291378    4498 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:57:37.300015    4498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53073
	I0719 11:57:37.300367    4498 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:57:37.300718    4498 main.go:141] libmachine: Using API Version  1
	I0719 11:57:37.300734    4498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:57:37.300951    4498 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:57:37.301060    4498 main.go:141] libmachine: (multinode-871000-m02) Calling .GetIP
	I0719 11:57:37.301142    4498 host.go:66] Checking if "multinode-871000-m02" exists ...
	I0719 11:57:37.301396    4498 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:57:37.301417    4498 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:57:37.309908    4498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53075
	I0719 11:57:37.310264    4498 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:57:37.310623    4498 main.go:141] libmachine: Using API Version  1
	I0719 11:57:37.310638    4498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:57:37.310848    4498 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:57:37.310957    4498 main.go:141] libmachine: (multinode-871000-m02) Calling .DriverName
	I0719 11:57:37.311103    4498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 11:57:37.311115    4498 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHHostname
	I0719 11:57:37.311221    4498 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHPort
	I0719 11:57:37.311323    4498 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHKeyPath
	I0719 11:57:37.311417    4498 main.go:141] libmachine: (multinode-871000-m02) Calling .GetSSHUsername
	I0719 11:57:37.311520    4498 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1053/.minikube/machines/multinode-871000-m02/id_rsa Username:docker}
	I0719 11:57:37.341328    4498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 11:57:37.351552    4498 status.go:257] multinode-871000-m02 status: &{Name:multinode-871000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0719 11:57:37.351569    4498 status.go:255] checking status of multinode-871000-m03 ...
	I0719 11:57:37.351837    4498 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 11:57:37.351858    4498 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 11:57:37.360517    4498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53078
	I0719 11:57:37.360936    4498 main.go:141] libmachine: () Calling .GetVersion
	I0719 11:57:37.361289    4498 main.go:141] libmachine: Using API Version  1
	I0719 11:57:37.361305    4498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 11:57:37.361501    4498 main.go:141] libmachine: () Calling .GetMachineName
	I0719 11:57:37.361611    4498 main.go:141] libmachine: (multinode-871000-m03) Calling .GetState
	I0719 11:57:37.361710    4498 main.go:141] libmachine: (multinode-871000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 11:57:37.361778    4498 main.go:141] libmachine: (multinode-871000-m03) DBG | hyperkit pid from json: 4291
	I0719 11:57:37.362899    4498 main.go:141] libmachine: (multinode-871000-m03) DBG | hyperkit pid 4291 missing from process table
	I0719 11:57:37.362918    4498 status.go:330] multinode-871000-m03 host status = "Stopped" (err=<nil>)
	I0719 11:57:37.362933    4498 status.go:343] host is not running, skipping remaining checks
	I0719 11:57:37.362940    4498 status.go:257] multinode-871000-m03 status: &{Name:multinode-871000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.82s)

TestMultiNode/serial/StartAfterStop (156.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-871000 node start m03 -v=7 --alsologtostderr: (2m35.737116805s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (156.10s)

TestMultiNode/serial/DeleteNode (9.16s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-871000 node delete m03: (8.829025372s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (9.16s)

TestMultiNode/serial/StopMultiNode (16.8s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-871000 stop: (16.634262635s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-871000 status: exit status 7 (84.838821ms)

-- stdout --
	multinode-871000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-871000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-871000 status --alsologtostderr: exit status 7 (78.330093ms)

-- stdout --
	multinode-871000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-871000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0719 12:04:03.487856    4928 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:04:03.488125    4928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:04:03.488130    4928 out.go:304] Setting ErrFile to fd 2...
	I0719 12:04:03.488133    4928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:04:03.488309    4928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1053/.minikube/bin
	I0719 12:04:03.488478    4928 out.go:298] Setting JSON to false
	I0719 12:04:03.488505    4928 mustload.go:65] Loading cluster: multinode-871000
	I0719 12:04:03.488544    4928 notify.go:220] Checking for updates...
	I0719 12:04:03.488835    4928 config.go:182] Loaded profile config "multinode-871000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:04:03.488850    4928 status.go:255] checking status of multinode-871000 ...
	I0719 12:04:03.489224    4928 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:04:03.489281    4928 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:04:03.498296    4928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53323
	I0719 12:04:03.498627    4928 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:04:03.499066    4928 main.go:141] libmachine: Using API Version  1
	I0719 12:04:03.499077    4928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:04:03.499272    4928 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:04:03.499376    4928 main.go:141] libmachine: (multinode-871000) Calling .GetState
	I0719 12:04:03.499475    4928 main.go:141] libmachine: (multinode-871000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:04:03.499546    4928 main.go:141] libmachine: (multinode-871000) DBG | hyperkit pid from json: 4843
	I0719 12:04:03.500467    4928 main.go:141] libmachine: (multinode-871000) DBG | hyperkit pid 4843 missing from process table
	I0719 12:04:03.500489    4928 status.go:330] multinode-871000 host status = "Stopped" (err=<nil>)
	I0719 12:04:03.500494    4928 status.go:343] host is not running, skipping remaining checks
	I0719 12:04:03.500501    4928 status.go:257] multinode-871000 status: &{Name:multinode-871000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 12:04:03.500528    4928 status.go:255] checking status of multinode-871000-m02 ...
	I0719 12:04:03.500772    4928 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0719 12:04:03.500794    4928 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0719 12:04:03.509078    4928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53325
	I0719 12:04:03.509412    4928 main.go:141] libmachine: () Calling .GetVersion
	I0719 12:04:03.509795    4928 main.go:141] libmachine: Using API Version  1
	I0719 12:04:03.509813    4928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 12:04:03.510056    4928 main.go:141] libmachine: () Calling .GetMachineName
	I0719 12:04:03.510176    4928 main.go:141] libmachine: (multinode-871000-m02) Calling .GetState
	I0719 12:04:03.510278    4928 main.go:141] libmachine: (multinode-871000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0719 12:04:03.510342    4928 main.go:141] libmachine: (multinode-871000-m02) DBG | hyperkit pid from json: 4857
	I0719 12:04:03.511233    4928 main.go:141] libmachine: (multinode-871000-m02) DBG | hyperkit pid 4857 missing from process table
	I0719 12:04:03.511264    4928 status.go:330] multinode-871000-m02 host status = "Stopped" (err=<nil>)
	I0719 12:04:03.511273    4928 status.go:343] host is not running, skipping remaining checks
	I0719 12:04:03.511281    4928 status.go:257] multinode-871000-m02 status: &{Name:multinode-871000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.80s)

TestMultiNode/serial/RestartMultiNode (112.56s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-871000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0719 12:05:15.212342    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-871000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m52.215205295s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-871000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (112.56s)

TestMultiNode/serial/ValidateNameConflict (46.08s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-871000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-871000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-871000-m02 --driver=hyperkit : exit status 14 (441.293147ms)

-- stdout --
	* [multinode-871000-m02] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1053/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-871000-m02' is duplicated with machine name 'multinode-871000-m02' in profile 'multinode-871000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-871000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-871000-m03 --driver=hyperkit : (40.058096559s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-871000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-871000: exit status 80 (266.371245ms)

-- stdout --
	* Adding node m03 to cluster multinode-871000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-871000-m03 already exists in multinode-871000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-871000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-871000-m03: (5.253152514s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.08s)

TestPreload (205.61s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-319000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0719 12:07:04.442175    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 12:07:12.163481    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-319000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (2m2.618782383s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-319000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-319000 image pull gcr.io/k8s-minikube/busybox: (1.235143337s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-319000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-319000: (8.38800008s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-319000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-319000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m7.972002528s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-319000 image list
helpers_test.go:175: Cleaning up "test-preload-319000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-319000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-319000: (5.236343899s)
--- PASS: TestPreload (205.61s)

TestScheduledStopUnix (110.03s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-427000 --memory=2048 --driver=hyperkit 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-427000 --memory=2048 --driver=hyperkit : (38.597105834s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-427000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-427000 -n scheduled-stop-427000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-427000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-427000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-427000 -n scheduled-stop-427000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-427000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-427000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0719 12:11:47.498009    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 12:12:04.440885    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-427000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-427000: exit status 7 (72.121808ms)

-- stdout --
	scheduled-stop-427000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-427000 -n scheduled-stop-427000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-427000 -n scheduled-stop-427000: exit status 7 (65.831272ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-427000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-427000
--- PASS: TestScheduledStopUnix (110.03s)

TestSkaffold (227.97s)
=== RUN   TestSkaffold
E0719 12:12:12.161199    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2484087219 version
skaffold_test.go:59: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2484087219 version: (1.708445834s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-514000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-514000 --memory=2600 --driver=hyperkit : (2m32.31494904s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2484087219 run --minikube-profile skaffold-514000 --kube-context skaffold-514000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2484087219 run --minikube-profile skaffold-514000 --kube-context skaffold-514000 --status-check=true --port-forward=false --interactive=false: (56.059909582s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-59b465b66f-fl4qx" [a93c5b33-48ff-44bf-b253-058460b13a01] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.005218179s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-fb9c78d65-zqlb6" [ce87ebc8-1c06-4302-8974-c0eb280f953e] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005597056s
helpers_test.go:175: Cleaning up "skaffold-514000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-514000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-514000: (5.243711972s)
--- PASS: TestSkaffold (227.97s)

TestRunningBinaryUpgrade (83.6s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.2351121477 start -p running-upgrade-036000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.2351121477 start -p running-upgrade-036000 --memory=2200 --vm-driver=hyperkit : (55.708404642s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-036000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-036000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (21.348215837s)
helpers_test.go:175: Cleaning up "running-upgrade-036000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-036000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-036000: (5.240484998s)
--- PASS: TestRunningBinaryUpgrade (83.60s)

TestKubernetesUpgrade (121.56s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-381000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-381000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (49.482125915s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-381000
E0719 12:21:24.247020    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-381000: (8.411379619s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-381000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-381000 status --format={{.Host}}: exit status 7 (69.288304ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-381000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit 
E0719 12:21:55.240866    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-381000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit : (33.519903163s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-381000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-381000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-381000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (460.929318ms)

-- stdout --
	* [kubernetes-upgrade-381000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1053/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-381000
	    minikube start -p kubernetes-upgrade-381000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3810002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-381000 --kubernetes-version=v1.31.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-381000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit 
E0719 12:22:04.470240    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 12:22:05.207542    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:22:12.189454    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-381000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit : (24.321750776s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-381000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-381000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-381000: (5.243574407s)
--- PASS: TestKubernetesUpgrade (121.56s)
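The K8S_DOWNGRADE_UNSUPPORTED exit in the stderr capture above comes from comparing the requested Kubernetes version against the cluster's current one. A minimal sketch of that comparison (hand-rolled for illustration, not minikube's actual code; pre-release suffixes like `-beta.0` are simply stripped):

```python
# Hand-rolled version comparison illustrating why v1.20.0 is rejected as a
# downgrade from v1.31.0-beta.0. Not minikube's implementation.
def parse(v: str) -> tuple:
    core = v.lstrip("v").split("-")[0]      # "v1.31.0-beta.0" -> "1.31.0"
    return tuple(int(part) for part in core.split("."))

current, requested = "v1.31.0-beta.0", "v1.20.0"
print(parse(requested) < parse(current))    # True: a downgrade, so refuse
```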

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.4s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19307
- KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current423820082/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current423820082/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current423820082/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current423820082/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.40s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.21s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19307
- KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current640332225/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current640332225/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current640332225/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current640332225/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.21s)
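The chown/chmod pair printed in both runs above is how minikube grants the hyperkit driver root privileges: the binary is made root-owned and given the setuid bit. The effect of `chmod u+s` can be observed on a throwaway file (a sketch; the real target is the `docker-machine-driver-hyperkit` binary):

```shell
# Demonstrate what `chmod u+s` does, using a scratch file instead of the
# real driver binary. With setuid set, the mode string gains an 's' (or an
# uppercase 'S' when the owner lacks execute permission, as on a mktemp file).
f=$(mktemp)
chmod u+s "$f"
ls -l "$f" | cut -c1-10   # e.g. -rwS------
rm -f "$f"
```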

TestStoppedBinaryUpgrade/Setup (1.03s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.03s)

TestStoppedBinaryUpgrade/Upgrade (83.2s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1906690222 start -p stopped-upgrade-582000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1906690222 start -p stopped-upgrade-582000 --memory=2200 --vm-driver=hyperkit : (38.708541927s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1906690222 -p stopped-upgrade-582000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1906690222 -p stopped-upgrade-582000 stop: (8.254366002s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-582000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0719 12:23:27.129652    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-582000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (36.211627449s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (83.20s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.29s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-582000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-582000: (3.28774166s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.29s)

TestPause/serial/Start (90.51s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-624000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-624000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (1m30.513800742s)
--- PASS: TestPause/serial/Start (90.51s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.46s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-328000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-328000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (457.456442ms)

-- stdout --
	* [NoKubernetes-328000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1053/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1053/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.46s)

TestNoKubernetes/serial/StartWithK8s (52.2s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-328000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-328000 --driver=hyperkit : (52.040220781s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-328000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (52.20s)

TestNoKubernetes/serial/StartWithStopK8s (8.57s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-328000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-328000 --no-kubernetes --driver=hyperkit : (6.012483434s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-328000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-328000 status -o json: exit status 2 (144.664032ms)

-- stdout --
	{"Name":"NoKubernetes-328000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-328000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-328000: (2.408548896s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.57s)
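The `status -o json` document captured above is convenient for scripting around `--no-kubernetes` profiles. A minimal parsing sketch using the exact payload from this run (field names are taken from that output, not from a documented schema):

```python
import json

# Payload captured in the log: host running, Kubernetes components stopped --
# exactly what a --no-kubernetes profile should report.
raw = ('{"Name":"NoKubernetes-328000","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

status = json.loads(raw)
k8s_up = status["Kubelet"] == "Running" and status["APIServer"] == "Running"
print(status["Host"], k8s_up)   # Running False
```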

TestNoKubernetes/serial/Start (21.17s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-328000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-328000 --no-kubernetes --driver=hyperkit : (21.166478539s)
--- PASS: TestNoKubernetes/serial/Start (21.17s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.12s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-328000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-328000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (124.733145ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.12s)

TestNoKubernetes/serial/ProfileList (0.46s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.46s)

TestNoKubernetes/serial/Stop (8.43s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-328000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-328000: (8.429355563s)
--- PASS: TestNoKubernetes/serial/Stop (8.43s)

TestPause/serial/SecondStartNoReconfiguration (41.69s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-624000 --alsologtostderr -v=1 --driver=hyperkit 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-624000 --alsologtostderr -v=1 --driver=hyperkit : (41.671353917s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.69s)

TestNoKubernetes/serial/StartNoArgs (19.68s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-328000 --driver=hyperkit 
E0719 12:25:43.281535    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-328000 --driver=hyperkit : (19.675782107s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.68s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-328000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-328000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (128.586786ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

TestNetworkPlugins/group/auto/Start (56.3s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
E0719 12:26:10.972669    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (56.30260077s)
--- PASS: TestNetworkPlugins/group/auto/Start (56.30s)

TestPause/serial/Pause (0.57s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-624000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.57s)

TestPause/serial/VerifyStatus (0.16s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-624000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-624000 --output=json --layout=cluster: exit status 2 (155.418361ms)

-- stdout --
	{"Name":"pause-624000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-624000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.16s)
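The `--output=json --layout=cluster` payload above reuses HTTP-style status codes per component (200 OK, 405 Stopped, 418 Paused). A sketch that extracts the paused components from a trimmed copy of that payload (trimmed for brevity; the codes are as observed in this run, not a documented contract):

```python
import json

# Trimmed from the status payload in the log: the apiserver is Paused (418)
# while the kubelet is Stopped (405).
raw = '''{"Name":"pause-624000","StatusCode":418,"StatusName":"Paused",
 "Nodes":[{"Name":"pause-624000","StatusCode":200,"Components":{
   "apiserver":{"StatusCode":418,"StatusName":"Paused"},
   "kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}'''

cluster = json.loads(raw)
paused = [name
          for node in cluster["Nodes"]
          for name, comp in node["Components"].items()
          if comp["StatusCode"] == 418]
print(paused)   # ['apiserver']
```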

TestPause/serial/Unpause (0.54s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-624000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.54s)

TestPause/serial/PauseAgain (0.59s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-624000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.59s)

TestPause/serial/DeletePaused (5.24s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-624000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-624000 --alsologtostderr -v=5: (5.236299715s)
--- PASS: TestPause/serial/DeletePaused (5.24s)

TestPause/serial/VerifyDeletedResources (0.21s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.21s)

TestNetworkPlugins/group/kindnet/Start (72.71s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m12.708633809s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.71s)

TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-204000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

TestNetworkPlugins/group/auto/NetCatPod (11.13s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-204000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-b8dfr" [0a282fb4-7080-4ace-9467-8a9c8d77bd40] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-b8dfr" [0a282fb4-7080-4ace-9467-8a9c8d77bd40] Running
E0719 12:27:04.471796    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003569221s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.13s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-204000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.10s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

TestNetworkPlugins/group/calico/Start (72.29s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m12.28581573s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.29s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bspv4" [daa6d04f-5a8c-4741-99c3-dc585d4a2d6b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003454853s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-204000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.16s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-204000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9t6xk" [c38cb9b3-5304-4711-918c-c72df3411c45] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9t6xk" [c38cb9b3-5304-4711-918c-c72df3411c45] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003528181s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.16s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-204000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.10s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

TestNetworkPlugins/group/kindnet/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/Start (177.98s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
E0719 12:28:27.528849    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (2m57.976641897s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (177.98s)

TestNetworkPlugins/group/calico/ControllerPod (6.00s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-67fcb" [354f519e-e4e2-4269-b4d2-4bf7fd2440c7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003166148s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-204000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

TestNetworkPlugins/group/calico/NetCatPod (11.12s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-204000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-njlbc" [0d747a92-13b6-44b6-a5a3-0b03eef2bd70] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-njlbc" [0d747a92-13b6-44b6-a5a3-0b03eef2bd70] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005339972s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.12s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-204000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.10s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/false/Start (55.25s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (55.253182295s)
--- PASS: TestNetworkPlugins/group/false/Start (55.25s)

TestNetworkPlugins/group/false/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-204000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.15s)

TestNetworkPlugins/group/false/NetCatPod (12.13s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-204000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pmw7b" [207257c1-c13c-4711-a393-68c177e530bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-pmw7b" [207257c1-c13c-4711-a393-68c177e530bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.004328535s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.13s)

TestNetworkPlugins/group/false/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-204000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (52.62s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
E0719 12:30:43.282029    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (52.624474165s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (52.62s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-204000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-204000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-md4hq" [33c1ba89-1b59-451c-8344-c79d6186ed9c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-md4hq" [33c1ba89-1b59-451c-8344-c79d6186ed9c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.002515505s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.13s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-204000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-204000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-204000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5cqxn" [93520ee4-9cae-409c-9a93-6333f3d46d87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5cqxn" [93520ee4-9cae-409c-9a93-6333f3d46d87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003311375s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.14s)

TestNetworkPlugins/group/flannel/Start (62.62s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (1m2.621039964s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.62s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-204000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/bridge/Start (166.20s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
E0719 12:31:56.191100    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/auto-204000/client.crt: no such file or directory
E0719 12:31:58.751365    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/auto-204000/client.crt: no such file or directory
E0719 12:32:03.871926    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/auto-204000/client.crt: no such file or directory
E0719 12:32:04.471474    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 12:32:12.191728    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 12:32:14.113583    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/auto-204000/client.crt: no such file or directory
E0719 12:32:34.440507    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:32:34.445630    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:32:34.456403    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:32:34.476840    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:32:34.519056    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:32:34.593902    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/auto-204000/client.crt: no such file or directory
E0719 12:32:34.599196    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:32:34.759287    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:32:35.079575    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:32:35.720047    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:32:37.000548    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (2m46.202760459s)
--- PASS: TestNetworkPlugins/group/bridge/Start (166.20s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bmwck" [52443410-668c-4bf5-9af3-5a69ed5f8a67] Running
E0719 12:32:39.562561    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004493412s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-204000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.15s)

TestNetworkPlugins/group/flannel/NetCatPod (11.13s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-204000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bf8l9" [08ddefbf-8d81-424b-8924-6c2089745049] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0719 12:32:44.684044    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-bf8l9" [08ddefbf-8d81-424b-8924-6c2089745049] Running
E0719 12:32:54.924536    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004004488s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.13s)
TestNetworkPlugins/group/flannel/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-204000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)
TestNetworkPlugins/group/flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)
TestNetworkPlugins/group/flannel/HairPin (0.09s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)
TestNetworkPlugins/group/kubenet/Start (92s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E0719 12:33:15.406878    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:33:15.554726    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/auto-204000/client.crt: no such file or directory
E0719 12:33:34.346679    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:33:34.351787    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:33:34.361946    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:33:34.382076    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:33:34.423468    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:33:34.503602    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:33:34.664499    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:33:34.984821    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:33:35.624979    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:33:36.905299    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:33:39.467574    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:33:44.589603    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:33:54.831230    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:33:56.367219    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:34:15.312576    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:34:37.475505    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/auto-204000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-204000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (1m31.99921296s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (92.00s)
TestNetworkPlugins/group/bridge/KubeletFlags (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-204000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.15s)
TestNetworkPlugins/group/bridge/NetCatPod (10.13s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-204000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nql5v" [d82c53ae-ce2f-4d43-b3ef-7a3648e5e038] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-nql5v" [d82c53ae-ce2f-4d43-b3ef-7a3648e5e038] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004192024s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.13s)
TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-204000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)
TestNetworkPlugins/group/kubenet/NetCatPod (11.14s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-204000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-67w7q" [1545eee3-e603-4854-8d2c-3dfbf2074ae3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-67w7q" [1545eee3-e603-4854-8d2c-3dfbf2074ae3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.003285977s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.14s)
TestNetworkPlugins/group/bridge/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-204000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)
TestNetworkPlugins/group/bridge/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)
TestNetworkPlugins/group/bridge/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)
TestNetworkPlugins/group/kubenet/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-204000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)
TestNetworkPlugins/group/kubenet/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.09s)
TestNetworkPlugins/group/kubenet/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-204000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)
TestStartStop/group/old-k8s-version/serial/FirstStart (173.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-082000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0719 12:35:10.079363    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/false-204000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-082000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (2m53.194368875s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (173.19s)
TestStartStop/group/no-preload/serial/FirstStart (210.23s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-954000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0719 12:35:15.200931    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/false-204000/client.crt: no such file or directory
E0719 12:35:18.287774    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:35:25.441915    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/false-204000/client.crt: no such file or directory
E0719 12:35:43.283973    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:35:45.923468    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/false-204000/client.crt: no such file or directory
E0719 12:36:06.639578    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:36:06.645658    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:36:06.657760    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:36:06.678428    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:36:06.720606    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:36:06.802155    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:36:06.963662    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:36:07.284863    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:36:07.925003    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:36:09.207212    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:36:11.767907    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:36:16.888836    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:36:18.194657    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:36:26.883940    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/false-204000/client.crt: no such file or directory
E0719 12:36:27.129526    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:36:27.289398    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:36:27.295846    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:36:27.306535    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:36:27.327743    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:36:27.368865    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:36:27.450962    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:36:27.611467    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:36:27.932714    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:36:28.573611    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:36:29.855258    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:36:32.416792    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:36:37.537662    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:36:47.610891    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:36:47.780110    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:36:53.627315    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/auto-204000/client.crt: no such file or directory
E0719 12:37:04.493217    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 12:37:06.357331    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:37:08.283996    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:37:12.220083    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 12:37:21.348742    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/auto-204000/client.crt: no such file or directory
E0719 12:37:28.607929    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:37:34.477116    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:37:38.388777    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:37:38.394788    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:37:38.405576    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:37:38.426696    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:37:38.467681    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:37:38.548231    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:37:38.710498    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:37:39.030850    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:37:39.672712    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:37:40.954279    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:37:43.514708    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:37:48.637101    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:37:48.842651    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/false-204000/client.crt: no such file or directory
E0719 12:37:49.259923    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:37:58.878328    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:38:02.168541    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-954000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (3m30.224950301s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (210.23s)
TestStartStop/group/old-k8s-version/serial/DeployApp (9.33s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-082000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6984d656-8c36-4c13-a463-3139fe606bfd] Pending
helpers_test.go:344: "busybox" [6984d656-8c36-4c13-a463-3139fe606bfd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6984d656-8c36-4c13-a463-3139fe606bfd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003099463s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-082000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.33s)
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-082000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-082000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)
TestStartStop/group/old-k8s-version/serial/Stop (8.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-082000 --alsologtostderr -v=3
E0719 12:38:19.360720    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-082000 --alsologtostderr -v=3: (8.39131543s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.39s)
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-082000 -n old-k8s-version-082000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-082000 -n old-k8s-version-082000: exit status 7 (66.802441ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-082000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)
TestStartStop/group/old-k8s-version/serial/SecondStart (403.76s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-082000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0719 12:38:34.387603    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:38:35.286671    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-082000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (6m43.588341277s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-082000 -n old-k8s-version-082000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (403.76s)
TestStartStop/group/no-preload/serial/DeployApp (8.21s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-954000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8694c7d1-8cdc-4b4f-b460-b9649dd11f89] Pending
helpers_test.go:344: "busybox" [8694c7d1-8cdc-4b4f-b460-b9649dd11f89] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8694c7d1-8cdc-4b4f-b460-b9649dd11f89] Running
E0719 12:38:50.534943    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00692446s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-954000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.21s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-954000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-954000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/no-preload/serial/Stop (8.45s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-954000 --alsologtostderr -v=3
E0719 12:39:00.323602    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-954000 --alsologtostderr -v=3: (8.454557367s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.45s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.35s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-954000 -n no-preload-954000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-954000 -n no-preload-954000: exit status 7 (67.042457ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-954000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0719 12:39:02.075708    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.35s)

TestStartStop/group/no-preload/serial/SecondStart (292.45s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-954000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0719 12:39:11.183337    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:39:42.428606    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:39:42.434888    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:39:42.445530    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:39:42.467627    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:39:42.509512    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:39:42.590354    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:39:42.750458    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:39:43.070545    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:39:43.710740    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:39:44.990913    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:39:45.769892    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:39:45.776282    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:39:45.788411    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:39:45.809997    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:39:45.852080    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:39:45.932260    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:39:46.092425    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:39:46.413164    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:39:47.053500    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:39:47.551356    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:39:48.334649    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:39:50.895766    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:39:52.671566    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:39:56.017315    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:40:02.911830    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:40:04.998012    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/false-204000/client.crt: no such file or directory
E0719 12:40:06.257842    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:40:22.246189    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:40:23.393196    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:40:26.739164    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:40:32.686475    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/false-204000/client.crt: no such file or directory
E0719 12:40:43.326017    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:41:04.354262    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:41:06.684112    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:41:07.700795    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:41:27.332924    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:41:34.378017    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:41:53.671746    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/auto-204000/client.crt: no such file or directory
E0719 12:41:55.028078    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:42:04.516701    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 12:42:12.236798    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
E0719 12:42:26.275741    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:42:29.622464    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:42:34.484446    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:42:38.394564    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:43:06.089565    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:43:34.392944    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-954000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (4m52.28479089s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-954000 -n no-preload-954000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (292.45s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-rqvpc" [98a70383-85c6-4834-a9ad-1ff4a5aed940] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004350738s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-rqvpc" [98a70383-85c6-4834-a9ad-1ff4a5aed940] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003838607s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-954000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-954000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/no-preload/serial/Pause (1.91s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-954000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-954000 -n no-preload-954000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-954000 -n no-preload-954000: exit status 2 (152.420968ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-954000 -n no-preload-954000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-954000 -n no-preload-954000: exit status 2 (152.87747ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-954000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-954000 -n no-preload-954000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-954000 -n no-preload-954000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.91s)

TestStartStop/group/embed-certs/serial/FirstStart (89.71s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-643000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3
E0719 12:44:42.432445    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:44:45.776400    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:45:05.001872    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/false-204000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-643000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3: (1m29.708828812s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.71s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-4dccz" [e986030d-5dd0-4a8a-97c7-bf5d2784fdd2] Running
E0719 12:45:07.577555    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 12:45:10.120056    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004216722s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-4dccz" [e986030d-5dd0-4a8a-97c7-bf5d2784fdd2] Running
E0719 12:45:13.465278    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008399177s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-082000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-082000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/old-k8s-version/serial/Pause (1.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-082000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-082000 -n old-k8s-version-082000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-082000 -n old-k8s-version-082000: exit status 2 (162.985639ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-082000 -n old-k8s-version-082000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-082000 -n old-k8s-version-082000: exit status 2 (160.844555ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-082000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-082000 -n old-k8s-version-082000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-082000 -n old-k8s-version-082000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.91s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-887000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-887000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3: (1m32.316835871s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.32s)

TestStartStop/group/embed-certs/serial/DeployApp (8.22s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-643000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2f4bd909-72a0-4473-a75a-84e6b594a55c] Pending
E0719 12:45:43.330537    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [2f4bd909-72a0-4473-a75a-84e6b594a55c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2f4bd909-72a0-4473-a75a-84e6b594a55c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.002530466s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-643000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.22s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-643000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-643000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/embed-certs/serial/Stop (8.49s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-643000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-643000 --alsologtostderr -v=3: (8.48820501s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.49s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-643000 -n embed-certs-643000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-643000 -n embed-certs-643000: exit status 7 (68.290747ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-643000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/embed-certs/serial/SecondStart (290.02s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-643000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3
E0719 12:46:06.687882    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
E0719 12:46:27.335748    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:46:53.674747    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/auto-204000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-643000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3: (4m49.850623449s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-643000 -n embed-certs-643000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (290.02s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-887000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4c1bf310-17e5-4dd6-aa87-239a9472b018] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4c1bf310-17e5-4dd6-aa87-239a9472b018] Running
E0719 12:47:04.520456    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004797495s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-887000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.20s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-887000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-887000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (8.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-887000 --alsologtostderr -v=3
E0719 12:47:12.241470    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-887000 --alsologtostderr -v=3: (8.40891877s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-887000 -n default-k8s-diff-port-887000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-887000 -n default-k8s-diff-port-887000: exit status 7 (66.970378ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-887000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (312.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-887000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3
E0719 12:47:34.489053    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:47:38.398612    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/flannel-204000/client.crt: no such file or directory
E0719 12:48:03.374924    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
E0719 12:48:03.380528    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
E0719 12:48:03.390866    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
E0719 12:48:03.412533    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
E0719 12:48:03.453260    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
E0719 12:48:03.534954    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
E0719 12:48:03.695815    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
E0719 12:48:04.017536    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
E0719 12:48:04.659604    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
E0719 12:48:05.939857    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
E0719 12:48:08.501170    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
E0719 12:48:13.622657    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
E0719 12:48:16.725548    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/auto-204000/client.crt: no such file or directory
E0719 12:48:23.863763    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
E0719 12:48:34.397853    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:48:44.345471    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
E0719 12:48:44.561084    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
E0719 12:48:44.566784    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
E0719 12:48:44.577414    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
E0719 12:48:44.597791    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
E0719 12:48:44.638832    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
E0719 12:48:44.720794    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
E0719 12:48:44.881227    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
E0719 12:48:45.202773    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
E0719 12:48:45.843245    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
E0719 12:48:47.123422    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
E0719 12:48:49.684672    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
E0719 12:48:54.805607    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
E0719 12:48:57.539331    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kindnet-204000/client.crt: no such file or directory
E0719 12:49:05.046398    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
E0719 12:49:25.306849    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
E0719 12:49:25.527711    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
E0719 12:49:42.438172    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/bridge-204000/client.crt: no such file or directory
E0719 12:49:45.780750    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/kubenet-204000/client.crt: no such file or directory
E0719 12:49:57.446813    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/calico-204000/client.crt: no such file or directory
E0719 12:50:05.007462    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/false-204000/client.crt: no such file or directory
E0719 12:50:06.488463    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
E0719 12:50:43.334587    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/skaffold-514000/client.crt: no such file or directory
E0719 12:50:47.230205    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/old-k8s-version-082000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-887000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3: (5m11.98270718s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-887000 -n default-k8s-diff-port-887000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (312.16s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-rcr2n" [9df5feba-c187-4295-9815-fbdff61be881] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005037459s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-rcr2n" [9df5feba-c187-4295-9815-fbdff61be881] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00375995s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-643000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-643000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/embed-certs/serial/Pause (1.93s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-643000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-643000 -n embed-certs-643000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-643000 -n embed-certs-643000: exit status 2 (158.82243ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-643000 -n embed-certs-643000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-643000 -n embed-certs-643000: exit status 2 (158.326294ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-643000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-643000 -n embed-certs-643000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-643000 -n embed-certs-643000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.93s)

TestStartStop/group/newest-cni/serial/FirstStart (41.58s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-669000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0719 12:51:27.341323    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/enable-default-cni-204000/client.crt: no such file or directory
E0719 12:51:28.057100    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/false-204000/client.crt: no such file or directory
E0719 12:51:28.409832    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/no-preload-954000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-669000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (41.579796464s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.58s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-669000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/newest-cni/serial/Stop (8.47s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-669000 --alsologtostderr -v=3
E0719 12:51:53.680902    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/auto-204000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-669000 --alsologtostderr -v=3: (8.470184644s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.47s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-669000 -n newest-cni-669000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-669000 -n newest-cni-669000: exit status 7 (66.06012ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-669000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/newest-cni/serial/SecondStart (29.97s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-669000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0719 12:52:04.525219    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/addons-910000/client.crt: no such file or directory
E0719 12:52:12.245815    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/functional-462000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-669000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (29.790196155s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-669000 -n newest-cni-669000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.97s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-gq5x4" [5caf77c4-847e-4d9d-95f7-2bfb3972320f] Running
E0719 12:52:29.748988    1592 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1053/.minikube/profiles/custom-flannel-204000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002429341s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-669000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/newest-cni/serial/Pause (1.83s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-669000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-669000 -n newest-cni-669000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-669000 -n newest-cni-669000: exit status 2 (164.110608ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-669000 -n newest-cni-669000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-669000 -n newest-cni-669000: exit status 2 (160.098021ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-669000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-669000 -n newest-cni-669000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-669000 -n newest-cni-669000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.83s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-gq5x4" [5caf77c4-847e-4d9d-95f7-2bfb3972320f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002982678s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-887000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-887000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (1.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-887000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-887000 -n default-k8s-diff-port-887000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-887000 -n default-k8s-diff-port-887000: exit status 2 (162.311027ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-887000 -n default-k8s-diff-port-887000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-887000 -n default-k8s-diff-port-887000: exit status 2 (160.609537ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-887000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-887000 -n default-k8s-diff-port-887000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-887000 -n default-k8s-diff-port-887000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.95s)

Test skip (22/344)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (5.89s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-204000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-204000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-204000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-204000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-204000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-204000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-204000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-204000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-204000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-204000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-204000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /etc/hosts:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /etc/resolv.conf:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-204000

>>> host: crictl pods:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: crictl containers:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> k8s: describe netcat deployment:
error: context "cilium-204000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-204000" does not exist

>>> k8s: netcat logs:
error: context "cilium-204000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-204000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-204000" does not exist

>>> k8s: coredns logs:
error: context "cilium-204000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-204000" does not exist

>>> k8s: api server logs:
error: context "cilium-204000" does not exist

>>> host: /etc/cni:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: ip a s:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: ip r s:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: iptables-save:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: iptables table nat:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-204000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-204000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-204000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-204000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-204000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-204000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-204000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-204000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-204000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-204000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-204000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: kubelet daemon config:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> k8s: kubelet logs:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-204000

>>> host: docker daemon status:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: docker daemon config:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: docker system info:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: cri-docker daemon status:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: cri-docker daemon config:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: cri-dockerd version:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: containerd daemon status:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: containerd daemon config:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: containerd config dump:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: crio daemon status:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: crio daemon config:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: /etc/crio:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

>>> host: crio config:
* Profile "cilium-204000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204000"

----------------------- debugLogs end: cilium-204000 [took: 5.617350678s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-204000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-204000
--- SKIP: TestNetworkPlugins/group/cilium (5.89s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-629000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-629000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)