Test Report: Hyperkit_macOS 17734

1d1c6f3c143e2d28fe63167ba90e3265538c6a3a:2023-12-12:32255

Tests failed (22/323)

TestFunctional/serial/StartWithProxy (15.49s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-303000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2233: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-303000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : exit status 90 (15.335571184s)

-- stdout --
	* [functional-303000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node functional-303000 in cluster functional-303000
	* Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Found network options:
	  - HTTP_PROXY=localhost:49838
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:49838 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:49838 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.169.0.5).
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 20:03:23 UTC, ends at Tue 2023-12-12 20:03:30 UTC. --
	Dec 12 20:03:24 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 20:03:24 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 20:03:27 functional-303000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 20:03:27 functional-303000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 20:03:27 functional-303000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 20:03:27 functional-303000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 20:03:27 functional-303000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 20:03:30 functional-303000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 20:03:30 functional-303000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 20:03:30 functional-303000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 20:03:30 functional-303000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 12 20:03:30 functional-303000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2235: failed minikube start. args "out/minikube-darwin-amd64 start -p functional-303000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit ": exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-303000 -n functional-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-303000 -n functional-303000: exit status 6 (151.354397ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1212 12:03:30.235199    4262 status.go:415] kubeconfig endpoint: extract IP: "functional-303000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-303000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/StartWithProxy (15.49s)

TestFunctional/serial/SoftStart (4.14s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-303000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-303000 --alsologtostderr -v=8: exit status 90 (3.981288886s)

-- stdout --
	* [functional-303000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting control plane node functional-303000 in cluster functional-303000
	* Updating the running hyperkit "functional-303000" VM ...
	
	

-- /stdout --
** stderr ** 
	I1212 12:03:30.300506    4267 out.go:296] Setting OutFile to fd 1 ...
	I1212 12:03:30.300804    4267 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:03:30.300809    4267 out.go:309] Setting ErrFile to fd 2...
	I1212 12:03:30.300813    4267 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:03:30.301000    4267 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	I1212 12:03:30.302436    4267 out.go:303] Setting JSON to false
	I1212 12:03:30.325797    4267 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1981,"bootTime":1702409429,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 12:03:30.325905    4267 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 12:03:30.349891    4267 out.go:177] * [functional-303000] minikube v1.32.0 on Darwin 14.2
	I1212 12:03:30.413719    4267 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 12:03:30.391766    4267 notify.go:220] Checking for updates...
	I1212 12:03:30.456626    4267 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:03:30.498654    4267 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 12:03:30.519766    4267 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 12:03:30.540557    4267 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	I1212 12:03:30.561752    4267 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 12:03:30.583574    4267 config.go:182] Loaded profile config "functional-303000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:03:30.583761    4267 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 12:03:30.584542    4267 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:03:30.584618    4267 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:03:30.593571    4267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49876
	I1212 12:03:30.593960    4267 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:03:30.594377    4267 main.go:141] libmachine: Using API Version  1
	I1212 12:03:30.594389    4267 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:03:30.594610    4267 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:03:30.594735    4267 main.go:141] libmachine: (functional-303000) Calling .DriverName
	I1212 12:03:30.622428    4267 out.go:177] * Using the hyperkit driver based on existing profile
	I1212 12:03:30.643662    4267 start.go:298] selected driver: hyperkit
	I1212 12:03:30.643679    4267 start.go:902] validating driver "hyperkit" against &{Name:functional-303000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-303000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.169.0.5 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 12:03:30.643811    4267 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 12:03:30.643955    4267 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 12:03:30.644069    4267 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17734-1975/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 12:03:30.652742    4267 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 12:03:30.656776    4267 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:03:30.656803    4267 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 12:03:30.659627    4267 cni.go:84] Creating CNI manager for ""
	I1212 12:03:30.659650    4267 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 12:03:30.659664    4267 start_flags.go:323] config:
	{Name:functional-303000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-303000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.169.0.5 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 12:03:30.659857    4267 iso.go:125] acquiring lock: {Name:mkd640d41cda61c79a7d2c2e38355d745b556a2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 12:03:30.689438    4267 out.go:177] * Starting control plane node functional-303000 in cluster functional-303000
	I1212 12:03:30.765739    4267 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 12:03:30.765819    4267 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 12:03:30.765849    4267 cache.go:56] Caching tarball of preloaded images
	I1212 12:03:30.766071    4267 preload.go:174] Found /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 12:03:30.766107    4267 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 12:03:30.766326    4267 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/config.json ...
	I1212 12:03:30.767176    4267 start.go:365] acquiring machines lock for functional-303000: {Name:mkcfb9a2794178bbcff953e64f7f6a3e3b1e9997 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 12:03:30.767274    4267 start.go:369] acquired machines lock for "functional-303000" in 73.48µs
	I1212 12:03:30.767309    4267 start.go:96] Skipping create...Using existing machine configuration
	I1212 12:03:30.767324    4267 fix.go:54] fixHost starting: 
	I1212 12:03:30.767733    4267 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:03:30.767764    4267 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:03:30.776338    4267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49878
	I1212 12:03:30.776691    4267 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:03:30.777073    4267 main.go:141] libmachine: Using API Version  1
	I1212 12:03:30.777089    4267 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:03:30.777364    4267 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:03:30.777496    4267 main.go:141] libmachine: (functional-303000) Calling .DriverName
	I1212 12:03:30.777593    4267 main.go:141] libmachine: (functional-303000) Calling .GetState
	I1212 12:03:30.777679    4267 main.go:141] libmachine: (functional-303000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:03:30.777743    4267 main.go:141] libmachine: (functional-303000) DBG | hyperkit pid from json: 4245
	I1212 12:03:30.778757    4267 fix.go:102] recreateIfNeeded on functional-303000: state=Running err=<nil>
	W1212 12:03:30.778770    4267 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 12:03:30.801714    4267 out.go:177] * Updating the running hyperkit "functional-303000" VM ...
	I1212 12:03:30.842665    4267 machine.go:88] provisioning docker machine ...
	I1212 12:03:30.842704    4267 main.go:141] libmachine: (functional-303000) Calling .DriverName
	I1212 12:03:30.843024    4267 main.go:141] libmachine: (functional-303000) Calling .GetMachineName
	I1212 12:03:30.843262    4267 buildroot.go:166] provisioning hostname "functional-303000"
	I1212 12:03:30.843287    4267 main.go:141] libmachine: (functional-303000) Calling .GetMachineName
	I1212 12:03:30.843523    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
	I1212 12:03:30.843744    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHPort
	I1212 12:03:30.843956    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:30.844156    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:30.844347    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHUsername
	I1212 12:03:30.844589    4267 main.go:141] libmachine: Using SSH client type: native
	I1212 12:03:30.845121    4267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1212 12:03:30.845141    4267 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-303000 && echo "functional-303000" | sudo tee /etc/hostname
	I1212 12:03:30.922958    4267 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-303000
	
	I1212 12:03:30.922990    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
	I1212 12:03:30.923153    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHPort
	I1212 12:03:30.923232    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:30.923337    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:30.923440    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHUsername
	I1212 12:03:30.923583    4267 main.go:141] libmachine: Using SSH client type: native
	I1212 12:03:30.923851    4267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1212 12:03:30.923865    4267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-303000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-303000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-303000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 12:03:30.993512    4267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 12:03:30.993534    4267 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17734-1975/.minikube CaCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17734-1975/.minikube}
	I1212 12:03:30.993558    4267 buildroot.go:174] setting up certificates
	I1212 12:03:30.993572    4267 provision.go:83] configureAuth start
	I1212 12:03:30.993581    4267 main.go:141] libmachine: (functional-303000) Calling .GetMachineName
	I1212 12:03:30.993756    4267 main.go:141] libmachine: (functional-303000) Calling .GetIP
	I1212 12:03:30.993861    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
	I1212 12:03:30.993943    4267 provision.go:138] copyHostCerts
	I1212 12:03:30.993974    4267 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem
	I1212 12:03:30.994023    4267 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem, removing ...
	I1212 12:03:30.994029    4267 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem
	I1212 12:03:30.994154    4267 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem (1123 bytes)
	I1212 12:03:30.994390    4267 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem
	I1212 12:03:30.994418    4267 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem, removing ...
	I1212 12:03:30.994422    4267 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem
	I1212 12:03:30.994499    4267 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem (1675 bytes)
	I1212 12:03:30.994652    4267 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem
	I1212 12:03:30.994678    4267 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem, removing ...
	I1212 12:03:30.994683    4267 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem
	I1212 12:03:30.994751    4267 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem (1078 bytes)
	I1212 12:03:30.994924    4267 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca-key.pem org=jenkins.functional-303000 san=[192.169.0.5 192.169.0.5 localhost 127.0.0.1 minikube functional-303000]
	I1212 12:03:31.185132    4267 provision.go:172] copyRemoteCerts
	I1212 12:03:31.185190    4267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 12:03:31.185208    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
	I1212 12:03:31.185436    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHPort
	I1212 12:03:31.185651    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:31.185800    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHUsername
	I1212 12:03:31.185979    4267 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/functional-303000/id_rsa Username:docker}
	I1212 12:03:31.226710    4267 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 12:03:31.226770    4267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 12:03:31.242079    4267 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 12:03:31.242132    4267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1212 12:03:31.259092    4267 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 12:03:31.259141    4267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 12:03:31.274782    4267 provision.go:86] duration metric: configureAuth took 281.208895ms
	I1212 12:03:31.274800    4267 buildroot.go:189] setting minikube options for container-runtime
	I1212 12:03:31.274920    4267 config.go:182] Loaded profile config "functional-303000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:03:31.274936    4267 main.go:141] libmachine: (functional-303000) Calling .DriverName
	I1212 12:03:31.275074    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
	I1212 12:03:31.275161    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHPort
	I1212 12:03:31.275253    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:31.275327    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:31.275409    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHUsername
	I1212 12:03:31.275511    4267 main.go:141] libmachine: Using SSH client type: native
	I1212 12:03:31.275754    4267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1212 12:03:31.275762    4267 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 12:03:31.346176    4267 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 12:03:31.346188    4267 buildroot.go:70] root file system type: tmpfs
	I1212 12:03:31.346270    4267 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 12:03:31.346286    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
	I1212 12:03:31.346431    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHPort
	I1212 12:03:31.346523    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:31.346605    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:31.346731    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHUsername
	I1212 12:03:31.346951    4267 main.go:141] libmachine: Using SSH client type: native
	I1212 12:03:31.347290    4267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1212 12:03:31.347343    4267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 12:03:31.426705    4267 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 12:03:31.426729    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
	I1212 12:03:31.426863    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHPort
	I1212 12:03:31.426977    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:31.427062    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:31.427144    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHUsername
	I1212 12:03:31.427258    4267 main.go:141] libmachine: Using SSH client type: native
	I1212 12:03:31.427508    4267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1212 12:03:31.427521    4267 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 12:03:31.502917    4267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 12:03:31.502935    4267 machine.go:91] provisioned docker machine in 660.268159ms
	I1212 12:03:31.502959    4267 start.go:300] post-start starting for "functional-303000" (driver="hyperkit")
	I1212 12:03:31.502998    4267 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 12:03:31.503009    4267 main.go:141] libmachine: (functional-303000) Calling .DriverName
	I1212 12:03:31.503381    4267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 12:03:31.503394    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
	I1212 12:03:31.503495    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHPort
	I1212 12:03:31.503682    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:31.503886    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHUsername
	I1212 12:03:31.503997    4267 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/functional-303000/id_rsa Username:docker}
	I1212 12:03:31.545612    4267 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 12:03:31.548194    4267 command_runner.go:130] > NAME=Buildroot
	I1212 12:03:31.548204    4267 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 12:03:31.548208    4267 command_runner.go:130] > ID=buildroot
	I1212 12:03:31.548214    4267 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 12:03:31.548226    4267 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 12:03:31.548374    4267 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 12:03:31.548384    4267 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17734-1975/.minikube/addons for local assets ...
	I1212 12:03:31.548457    4267 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17734-1975/.minikube/files for local assets ...
	I1212 12:03:31.548593    4267 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem -> 31982.pem in /etc/ssl/certs
	I1212 12:03:31.548599    4267 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem -> /etc/ssl/certs/31982.pem
	I1212 12:03:31.548750    4267 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/test/nested/copy/3198/hosts -> hosts in /etc/test/nested/copy/3198
	I1212 12:03:31.548755    4267 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/test/nested/copy/3198/hosts -> /etc/test/nested/copy/3198/hosts
	I1212 12:03:31.548798    4267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/3198
	I1212 12:03:31.555293    4267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem --> /etc/ssl/certs/31982.pem (1708 bytes)
	I1212 12:03:31.570919    4267 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/test/nested/copy/3198/hosts --> /etc/test/nested/copy/3198/hosts (40 bytes)
	I1212 12:03:31.586451    4267 start.go:303] post-start completed in 83.48666ms
	I1212 12:03:31.586467    4267 fix.go:56] fixHost completed within 819.174885ms
	I1212 12:03:31.586481    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
	I1212 12:03:31.586605    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHPort
	I1212 12:03:31.586715    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:31.586814    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:31.586904    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHUsername
	I1212 12:03:31.587015    4267 main.go:141] libmachine: Using SSH client type: native
	I1212 12:03:31.587260    4267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1212 12:03:31.587268    4267 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 12:03:31.656401    4267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702411411.765762644
	
	I1212 12:03:31.656415    4267 fix.go:206] guest clock: 1702411411.765762644
	I1212 12:03:31.656425    4267 fix.go:219] Guest: 2023-12-12 12:03:31.765762644 -0800 PST Remote: 2023-12-12 12:03:31.586469 -0800 PST m=+1.331220374 (delta=179.293644ms)
	I1212 12:03:31.656444    4267 fix.go:190] guest clock delta is within tolerance: 179.293644ms
	I1212 12:03:31.656447    4267 start.go:83] releasing machines lock for "functional-303000", held for 889.193649ms
	I1212 12:03:31.656463    4267 main.go:141] libmachine: (functional-303000) Calling .DriverName
	I1212 12:03:31.656598    4267 main.go:141] libmachine: (functional-303000) Calling .GetIP
	I1212 12:03:31.656683    4267 main.go:141] libmachine: (functional-303000) Calling .DriverName
	I1212 12:03:31.657009    4267 main.go:141] libmachine: (functional-303000) Calling .DriverName
	I1212 12:03:31.657137    4267 main.go:141] libmachine: (functional-303000) Calling .DriverName
	I1212 12:03:31.657222    4267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 12:03:31.657252    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
	I1212 12:03:31.657296    4267 ssh_runner.go:195] Run: cat /version.json
	I1212 12:03:31.657308    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
	I1212 12:03:31.657343    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHPort
	I1212 12:03:31.657419    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:31.657471    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHPort
	I1212 12:03:31.657514    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHUsername
	I1212 12:03:31.657563    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
	I1212 12:03:31.657598    4267 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/functional-303000/id_rsa Username:docker}
	I1212 12:03:31.657652    4267 main.go:141] libmachine: (functional-303000) Calling .GetSSHUsername
	I1212 12:03:31.657739    4267 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/functional-303000/id_rsa Username:docker}
	I1212 12:03:31.696282    4267 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
	I1212 12:03:31.696606    4267 ssh_runner.go:195] Run: systemctl --version
	I1212 12:03:31.756783    4267 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 12:03:31.757622    4267 command_runner.go:130] > systemd 247 (247)
	I1212 12:03:31.757653    4267 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 12:03:31.757805    4267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 12:03:31.763353    4267 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 12:03:31.763560    4267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 12:03:31.763625    4267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 12:03:31.770056    4267 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 12:03:31.770071    4267 start.go:475] detecting cgroup driver to use...
	I1212 12:03:31.770177    4267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 12:03:31.781727    4267 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 12:03:31.782062    4267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 12:03:31.789241    4267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 12:03:31.796522    4267 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 12:03:31.796578    4267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 12:03:31.804023    4267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 12:03:31.811300    4267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 12:03:31.819339    4267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 12:03:31.826543    4267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 12:03:31.833736    4267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 12:03:31.840760    4267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 12:03:31.847116    4267 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 12:03:31.847364    4267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 12:03:31.854665    4267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:03:31.936731    4267 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 12:03:31.948696    4267 start.go:475] detecting cgroup driver to use...
	I1212 12:03:31.948766    4267 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 12:03:31.958446    4267 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 12:03:31.959113    4267 command_runner.go:130] > [Unit]
	I1212 12:03:31.959145    4267 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 12:03:31.959152    4267 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 12:03:31.959161    4267 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 12:03:31.959166    4267 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 12:03:31.959171    4267 command_runner.go:130] > StartLimitBurst=3
	I1212 12:03:31.959175    4267 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 12:03:31.959178    4267 command_runner.go:130] > [Service]
	I1212 12:03:31.959182    4267 command_runner.go:130] > Type=notify
	I1212 12:03:31.959185    4267 command_runner.go:130] > Restart=on-failure
	I1212 12:03:31.959193    4267 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 12:03:31.959207    4267 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 12:03:31.959213    4267 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 12:03:31.959219    4267 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 12:03:31.959225    4267 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 12:03:31.959231    4267 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 12:03:31.959241    4267 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 12:03:31.959254    4267 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 12:03:31.959260    4267 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 12:03:31.959266    4267 command_runner.go:130] > ExecStart=
	I1212 12:03:31.959279    4267 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I1212 12:03:31.959286    4267 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 12:03:31.959293    4267 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 12:03:31.959300    4267 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 12:03:31.959306    4267 command_runner.go:130] > LimitNOFILE=infinity
	I1212 12:03:31.959311    4267 command_runner.go:130] > LimitNPROC=infinity
	I1212 12:03:31.959317    4267 command_runner.go:130] > LimitCORE=infinity
	I1212 12:03:31.959323    4267 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 12:03:31.959342    4267 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 12:03:31.959369    4267 command_runner.go:130] > TasksMax=infinity
	I1212 12:03:31.959373    4267 command_runner.go:130] > TimeoutStartSec=0
	I1212 12:03:31.959379    4267 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 12:03:31.959383    4267 command_runner.go:130] > Delegate=yes
	I1212 12:03:31.959388    4267 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 12:03:31.959392    4267 command_runner.go:130] > KillMode=process
	I1212 12:03:31.959395    4267 command_runner.go:130] > [Install]
	I1212 12:03:31.959404    4267 command_runner.go:130] > WantedBy=multi-user.target
	I1212 12:03:31.959584    4267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 12:03:31.969697    4267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 12:03:31.983852    4267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 12:03:31.993009    4267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 12:03:32.002015    4267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 12:03:32.014881    4267 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 12:03:32.014949    4267 ssh_runner.go:195] Run: which cri-dockerd
	I1212 12:03:32.017080    4267 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 12:03:32.017317    4267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 12:03:32.023456    4267 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 12:03:32.034411    4267 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 12:03:32.129675    4267 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 12:03:32.224960    4267 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 12:03:32.225042    4267 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 12:03:32.236849    4267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:03:32.333908    4267 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 12:03:33.682630    4267 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.348750347s)
	I1212 12:03:33.682698    4267 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 12:03:33.769790    4267 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 12:03:33.864520    4267 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 12:03:33.960817    4267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:03:34.050442    4267 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 12:03:34.060185    4267 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I1212 12:03:34.060247    4267 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1212 12:03:34.068030    4267 command_runner.go:130] > -- Journal begins at Tue 2023-12-12 20:03:23 UTC, ends at Tue 2023-12-12 20:03:34 UTC. --
	I1212 12:03:34.068043    4267 command_runner.go:130] > Dec 12 20:03:24 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1212 12:03:34.068048    4267 command_runner.go:130] > Dec 12 20:03:24 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1212 12:03:34.068056    4267 command_runner.go:130] > Dec 12 20:03:27 functional-303000 systemd[1]: cri-docker.socket: Succeeded.
	I1212 12:03:34.068062    4267 command_runner.go:130] > Dec 12 20:03:27 functional-303000 systemd[1]: Closed CRI Docker Socket for the API.
	I1212 12:03:34.068068    4267 command_runner.go:130] > Dec 12 20:03:27 functional-303000 systemd[1]: Stopping CRI Docker Socket for the API.
	I1212 12:03:34.068076    4267 command_runner.go:130] > Dec 12 20:03:27 functional-303000 systemd[1]: Starting CRI Docker Socket for the API.
	I1212 12:03:34.068082    4267 command_runner.go:130] > Dec 12 20:03:27 functional-303000 systemd[1]: Listening on CRI Docker Socket for the API.
	I1212 12:03:34.068087    4267 command_runner.go:130] > Dec 12 20:03:30 functional-303000 systemd[1]: cri-docker.socket: Succeeded.
	I1212 12:03:34.068094    4267 command_runner.go:130] > Dec 12 20:03:30 functional-303000 systemd[1]: Closed CRI Docker Socket for the API.
	I1212 12:03:34.068099    4267 command_runner.go:130] > Dec 12 20:03:30 functional-303000 systemd[1]: Stopping CRI Docker Socket for the API.
	I1212 12:03:34.068108    4267 command_runner.go:130] > Dec 12 20:03:30 functional-303000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	I1212 12:03:34.068114    4267 command_runner.go:130] > Dec 12 20:03:30 functional-303000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	I1212 12:03:34.068120    4267 command_runner.go:130] > Dec 12 20:03:34 functional-303000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	I1212 12:03:34.068127    4267 command_runner.go:130] > Dec 12 20:03:34 functional-303000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	I1212 12:03:34.092238    4267 out.go:177] 
	W1212 12:03:34.113808    4267 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 20:03:23 UTC, ends at Tue 2023-12-12 20:03:34 UTC. --
	Dec 12 20:03:24 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 20:03:24 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 20:03:27 functional-303000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 20:03:27 functional-303000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 20:03:27 functional-303000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 20:03:27 functional-303000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 20:03:27 functional-303000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 20:03:30 functional-303000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 20:03:30 functional-303000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 20:03:30 functional-303000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 20:03:30 functional-303000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 12 20:03:30 functional-303000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	Dec 12 20:03:34 functional-303000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 12 20:03:34 functional-303000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 20:03:23 UTC, ends at Tue 2023-12-12 20:03:34 UTC. --
	Dec 12 20:03:24 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 20:03:24 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 20:03:27 functional-303000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 20:03:27 functional-303000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 20:03:27 functional-303000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 20:03:27 functional-303000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 20:03:27 functional-303000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 20:03:30 functional-303000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 20:03:30 functional-303000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 20:03:30 functional-303000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 20:03:30 functional-303000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 12 20:03:30 functional-303000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	Dec 12 20:03:34 functional-303000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 12 20:03:34 functional-303000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1212 12:03:34.113838    4267 out.go:239] * 
	* 
	W1212 12:03:34.115069    4267 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 12:03:34.182479    4267 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-amd64 start -p functional-303000 --alsologtostderr -v=8": exit status 90
functional_test.go:659: soft start took 3.986894785s for "functional-303000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-303000 -n functional-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-303000 -n functional-303000: exit status 6 (150.79898ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:03:34.373713    4282 status.go:415] kubeconfig endpoint: extract IP: "functional-303000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-303000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/SoftStart (4.14s)
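
The root cause above is systemd refusing to restart cri-docker.socket while cri-docker.service still holds it ("Socket service cri-docker.service already active, refusing"). A minimal recovery sketch for the state shown in the journal, using only the unit names it prints (not a claim about what minikube itself does or should do):

	# The refusal applies to (re)starting the socket while its service is active,
	# so stop the service first; the socket restart is then accepted:
	sudo systemctl stop cri-docker.service
	sudo systemctl restart cri-docker.socket
	sudo systemctl start cri-docker.service
	# If it still fails, inspect the service unit as well as the socket:
	sudo journalctl --no-pager -u cri-docker.service -u cri-docker.socket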

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (36.464592ms)

                                                
                                                
** stderr ** 
	error: current-context is not set

                                                
                                                
** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-303000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-303000 -n functional-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-303000 -n functional-303000: exit status 6 (150.137267ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:03:34.561150    4288 status.go:415] kubeconfig endpoint: extract IP: "functional-303000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-303000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/KubeContext (0.19s)
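
Because the start attempts above exited at RUNTIME_ENABLE before a kubeconfig entry was written, there is no "functional-303000" context to select; the same absent context also explains the KubectlGetPods, MinikubeKubectlCmd and MinikubeKubectlCmdDirectly failures below. A hedged sketch of how one might confirm that by hand (note the `minikube update-context` hint from the status output cannot help here, since the profile never made it into kubeconfig at all):

	# Confirm the context is simply missing from the kubeconfig the test uses:
	kubectl config get-contexts
	kubectl config current-context    # "error: current-context is not set"
	grep -c functional-303000 /Users/jenkins/minikube-integration/17734-1975/kubeconfig    # expected 0, per the error above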

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-303000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-303000 get po -A: exit status 1 (35.305923ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-303000

                                                
                                                
** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-303000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-303000\n"*: args "kubectl --context functional-303000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-303000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-303000 -n functional-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-303000 -n functional-303000: exit status 6 (150.355115ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:03:34.747136    4296 status.go:415] kubeconfig endpoint: extract IP: "functional-303000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-303000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.19s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (2.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-303000 ssh sudo crictl images: exit status 1 (2.144086642s)

                                                
                                                
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1122: failed to get images by "out/minikube-darwin-amd64 -p functional-303000 ssh sudo crictl images" ssh exit status 1
functional_test.go:1126: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (2.14s)
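
The crictl timeout here ("validate CRI v1 image API for endpoint unix:///var/run/cri-dockerd.sock ... DeadlineExceeded") is consistent with the cri-docker units never recovering from the socket failure above, rather than with a problem in crictl itself. A hedged sketch of how one might separate the two from inside the VM:

	# Is cri-dockerd actually up behind the socket crictl is probing?
	sudo systemctl status cri-docker.service cri-docker.socket
	# Probe the same endpoint crictl uses, with the runtime endpoint made explicit:
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info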

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (5.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-303000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (2.144249842s)

                                                
                                                
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-303000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (2.147662236s)

                                                
                                                
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1161: expected "out/minikube-darwin-amd64 -p functional-303000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 1
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (5.34s)
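
Note that the `cache reload` step itself is not what fails here (no non-zero exit is logged for it); only the crictl verification times out, for the same cri-dockerd reason as the previous subtest. Since this cluster's runtime is Docker, one could, as a hedged alternative check, ask the engine directly whether the reloaded image is present, bypassing the CRI socket entirely:

	# Alternative verification that does not go through crictl/cri-dockerd:
	out/minikube-darwin-amd64 -p functional-303000 ssh sudo docker images registry.k8s.io/pause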

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.67s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 kubectl -- --context functional-303000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-303000 kubectl -- --context functional-303000 get pods: exit status 1 (516.121864ms)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-303000
	* no server found for cluster "functional-303000"

                                                
                                                
** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-amd64 -p functional-303000 kubectl -- --context functional-303000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-303000 -n functional-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-303000 -n functional-303000: exit status 6 (150.738415ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:03:48.360858    4424 status.go:415] kubeconfig endpoint: extract IP: "functional-303000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-303000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.67s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-303000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-303000 get pods: exit status 1 (768.691999ms)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-303000
	* no server found for cluster "functional-303000"

                                                
                                                
** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-303000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-303000 -n functional-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-303000 -n functional-303000: exit status 6 (151.601099ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:03:49.281969    4435 status.go:415] kubeconfig endpoint: extract IP: "functional-303000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "functional-303000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (15.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-675000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
multinode_test.go:86: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-675000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : exit status 90 (15.099864499s)

                                                
                                                
-- stdout --
	* [multinode-675000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node multinode-675000 in cluster multinode-675000
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 12:13:58.335073    6142 out.go:296] Setting OutFile to fd 1 ...
	I1212 12:13:58.335258    6142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:13:58.335265    6142 out.go:309] Setting ErrFile to fd 2...
	I1212 12:13:58.335269    6142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:13:58.335484    6142 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	I1212 12:13:58.337060    6142 out.go:303] Setting JSON to false
	I1212 12:13:58.360167    6142 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2609,"bootTime":1702409429,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 12:13:58.360283    6142 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 12:13:58.381498    6142 out.go:177] * [multinode-675000] minikube v1.32.0 on Darwin 14.2
	I1212 12:13:58.447010    6142 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 12:13:58.425173    6142 notify.go:220] Checking for updates...
	I1212 12:13:58.489079    6142 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:13:58.530974    6142 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 12:13:58.573098    6142 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 12:13:58.636014    6142 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	I1212 12:13:58.678157    6142 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 12:13:58.699687    6142 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 12:13:58.730917    6142 out.go:177] * Using the hyperkit driver based on user configuration
	I1212 12:13:58.752055    6142 start.go:298] selected driver: hyperkit
	I1212 12:13:58.752082    6142 start.go:902] validating driver "hyperkit" against <nil>
	I1212 12:13:58.752103    6142 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 12:13:58.756114    6142 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 12:13:58.756228    6142 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17734-1975/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 12:13:58.764135    6142 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 12:13:58.768176    6142 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:13:58.768212    6142 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 12:13:58.768251    6142 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 12:13:58.768462    6142 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 12:13:58.768524    6142 cni.go:84] Creating CNI manager for ""
	I1212 12:13:58.768533    6142 cni.go:136] 0 nodes found, recommending kindnet
	I1212 12:13:58.768540    6142 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 12:13:58.768550    6142 start_flags.go:323] config:
	{Name:multinode-675000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-675000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 12:13:58.768694    6142 iso.go:125] acquiring lock: {Name:mkd640d41cda61c79a7d2c2e38355d745b556a2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 12:13:58.811984    6142 out.go:177] * Starting control plane node multinode-675000 in cluster multinode-675000
	I1212 12:13:58.833123    6142 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 12:13:58.833252    6142 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 12:13:58.833279    6142 cache.go:56] Caching tarball of preloaded images
	I1212 12:13:58.833488    6142 preload.go:174] Found /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 12:13:58.833511    6142 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 12:13:58.834055    6142 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/config.json ...
	I1212 12:13:58.834107    6142 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/config.json: {Name:mkceb0fb48e3f5439c744539227c5d46e7657e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:13:58.834718    6142 start.go:365] acquiring machines lock for multinode-675000: {Name:mkcfb9a2794178bbcff953e64f7f6a3e3b1e9997 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 12:13:58.834820    6142 start.go:369] acquired machines lock for "multinode-675000" in 78.185µs
	I1212 12:13:58.834860    6142 start.go:93] Provisioning new machine with config: &{Name:multinode-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-675000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 12:13:58.834975    6142 start.go:125] createHost starting for "" (driver="hyperkit")
	I1212 12:13:58.878130    6142 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 12:13:58.878532    6142 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:13:58.878608    6142 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:13:58.888578    6142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51194
	I1212 12:13:58.888940    6142 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:13:58.889389    6142 main.go:141] libmachine: Using API Version  1
	I1212 12:13:58.889401    6142 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:13:58.889642    6142 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:13:58.889755    6142 main.go:141] libmachine: (multinode-675000) Calling .GetMachineName
	I1212 12:13:58.889833    6142 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:13:58.889927    6142 start.go:159] libmachine.API.Create for "multinode-675000" (driver="hyperkit")
	I1212 12:13:58.889949    6142 client.go:168] LocalClient.Create starting
	I1212 12:13:58.889982    6142 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem
	I1212 12:13:58.890034    6142 main.go:141] libmachine: Decoding PEM data...
	I1212 12:13:58.890049    6142 main.go:141] libmachine: Parsing certificate...
	I1212 12:13:58.890119    6142 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem
	I1212 12:13:58.890156    6142 main.go:141] libmachine: Decoding PEM data...
	I1212 12:13:58.890168    6142 main.go:141] libmachine: Parsing certificate...
	I1212 12:13:58.890181    6142 main.go:141] libmachine: Running pre-create checks...
	I1212 12:13:58.890191    6142 main.go:141] libmachine: (multinode-675000) Calling .PreCreateCheck
	I1212 12:13:58.890270    6142 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:13:58.890455    6142 main.go:141] libmachine: (multinode-675000) Calling .GetConfigRaw
	I1212 12:13:58.890898    6142 main.go:141] libmachine: Creating machine...
	I1212 12:13:58.890908    6142 main.go:141] libmachine: (multinode-675000) Calling .Create
	I1212 12:13:58.890980    6142 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:13:58.891137    6142 main.go:141] libmachine: (multinode-675000) DBG | I1212 12:13:58.890971    6150 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/17734-1975/.minikube
	I1212 12:13:58.891193    6142 main.go:141] libmachine: (multinode-675000) Downloading /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17734-1975/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 12:13:59.052357    6142 main.go:141] libmachine: (multinode-675000) DBG | I1212 12:13:59.052298    6150 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa...
	I1212 12:13:59.110577    6142 main.go:141] libmachine: (multinode-675000) DBG | I1212 12:13:59.110509    6150 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/multinode-675000.rawdisk...
	I1212 12:13:59.110595    6142 main.go:141] libmachine: (multinode-675000) DBG | Writing magic tar header
	I1212 12:13:59.110631    6142 main.go:141] libmachine: (multinode-675000) DBG | Writing SSH key tar header
	I1212 12:13:59.111313    6142 main.go:141] libmachine: (multinode-675000) DBG | I1212 12:13:59.111269    6150 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000 ...
	I1212 12:13:59.440566    6142 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:13:59.440628    6142 main.go:141] libmachine: (multinode-675000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/hyperkit.pid
	I1212 12:13:59.440641    6142 main.go:141] libmachine: (multinode-675000) DBG | Using UUID fbe44634-992a-11ee-b1fb-f01898ef957c
	I1212 12:13:59.576714    6142 main.go:141] libmachine: (multinode-675000) DBG | Generated MAC 6:ed:17:4f:83:b2
	I1212 12:13:59.576738    6142 main.go:141] libmachine: (multinode-675000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-675000
	I1212 12:13:59.576772    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:13:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fbe44634-992a-11ee-b1fb-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001823c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I1212 12:13:59.576808    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:13:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fbe44634-992a-11ee-b1fb-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001823c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I1212 12:13:59.576857    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:13:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fbe44634-992a-11ee-b1fb-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/multinode-675000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/tty,log=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/bzimage,/Users/jenkins/minikube-integration/1773
4-1975/.minikube/machines/multinode-675000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-675000"}
	I1212 12:13:59.576975    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:13:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fbe44634-992a-11ee-b1fb-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/multinode-675000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/tty,log=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/console-ring -f kexec,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/bzimage,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/initrd,earlyprintk=
serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-675000"
	I1212 12:13:59.577010    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:13:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1212 12:13:59.580760    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:13:59 DEBUG: hyperkit: Pid is 6153
	I1212 12:13:59.581197    6142 main.go:141] libmachine: (multinode-675000) DBG | Attempt 0
	I1212 12:13:59.581225    6142 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:13:59.581302    6142 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6153
	I1212 12:13:59.582391    6142 main.go:141] libmachine: (multinode-675000) DBG | Searching for 6:ed:17:4f:83:b2 in /var/db/dhcpd_leases ...
	I1212 12:13:59.582485    6142 main.go:141] libmachine: (multinode-675000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I1212 12:13:59.582527    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:61:fd:7b:ff:ad ID:1,d6:61:fd:7b:ff:ad Lease:0x6578bf04}
	I1212 12:13:59.582553    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:7e:4f:44:39:56:54 ID:1,7e:4f:44:39:56:54 Lease:0x6578bed7}
	I1212 12:13:59.582592    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:66:e9:56:a3:ac:b3 ID:1,66:e9:56:a3:ac:b3 Lease:0x6578beae}
	I1212 12:13:59.582620    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:44:78:68:b1:3a ID:1,3e:44:78:68:b1:3a Lease:0x657a0fe7}
	I1212 12:13:59.582632    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:f3:3:99:96:a ID:1,1e:f3:3:99:96:a Lease:0x657a0fab}
	I1212 12:13:59.582648    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:a6:c:94:a4:bb:23 ID:1,a6:c:94:a4:bb:23 Lease:0x657a0f0c}
	I1212 12:13:59.582664    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:52:61:6b:49:5:19 ID:1,52:61:6b:49:5:19 Lease:0x6578bd76}
	I1212 12:13:59.582675    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:4:40:f1:6c:89 ID:1,aa:4:40:f1:6c:89 Lease:0x657a0e0c}
	I1212 12:13:59.582683    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:32:8b:81:0:e0:a2 ID:1,32:8b:81:0:e0:a2 Lease:0x6578bc81}
	I1212 12:13:59.582697    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:e6:d:d2:81:4b ID:1,aa:e6:d:d2:81:4b Lease:0x657a0cd4}
	I1212 12:13:59.582714    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a0c44}
	I1212 12:13:59.587860    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:13:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1212 12:13:59.650667    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:13:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1212 12:13:59.651601    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:13:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 12:13:59.651632    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:13:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 12:13:59.651649    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:13:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 12:13:59.651667    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:13:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 12:14:00.030029    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:14:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1212 12:14:00.030048    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:14:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1212 12:14:00.134089    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:14:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 12:14:00.134108    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:14:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 12:14:00.134127    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:14:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 12:14:00.134141    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:14:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 12:14:00.135033    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:14:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1212 12:14:00.135045    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:14:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1212 12:14:01.583051    6142 main.go:141] libmachine: (multinode-675000) DBG | Attempt 1
	I1212 12:14:01.583072    6142 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:14:01.583162    6142 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6153
	I1212 12:14:01.584071    6142 main.go:141] libmachine: (multinode-675000) DBG | Searching for 6:ed:17:4f:83:b2 in /var/db/dhcpd_leases ...
	I1212 12:14:01.584143    6142 main.go:141] libmachine: (multinode-675000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I1212 12:14:01.584154    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:61:fd:7b:ff:ad ID:1,d6:61:fd:7b:ff:ad Lease:0x6578bf04}
	I1212 12:14:01.584169    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:7e:4f:44:39:56:54 ID:1,7e:4f:44:39:56:54 Lease:0x6578bed7}
	I1212 12:14:01.584179    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:66:e9:56:a3:ac:b3 ID:1,66:e9:56:a3:ac:b3 Lease:0x6578beae}
	I1212 12:14:01.584186    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:44:78:68:b1:3a ID:1,3e:44:78:68:b1:3a Lease:0x657a0fe7}
	I1212 12:14:01.584198    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:f3:3:99:96:a ID:1,1e:f3:3:99:96:a Lease:0x657a0fab}
	I1212 12:14:01.584218    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:a6:c:94:a4:bb:23 ID:1,a6:c:94:a4:bb:23 Lease:0x657a0f0c}
	I1212 12:14:01.584256    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:52:61:6b:49:5:19 ID:1,52:61:6b:49:5:19 Lease:0x6578bd76}
	I1212 12:14:01.584283    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:4:40:f1:6c:89 ID:1,aa:4:40:f1:6c:89 Lease:0x657a0e0c}
	I1212 12:14:01.584293    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:32:8b:81:0:e0:a2 ID:1,32:8b:81:0:e0:a2 Lease:0x6578bc81}
	I1212 12:14:01.584303    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:e6:d:d2:81:4b ID:1,aa:e6:d:d2:81:4b Lease:0x657a0cd4}
	I1212 12:14:01.584311    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a0c44}
	I1212 12:14:03.584243    6142 main.go:141] libmachine: (multinode-675000) DBG | Attempt 2
	I1212 12:14:03.584259    6142 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:14:03.584337    6142 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6153
	I1212 12:14:03.585480    6142 main.go:141] libmachine: (multinode-675000) DBG | Searching for 6:ed:17:4f:83:b2 in /var/db/dhcpd_leases ...
	I1212 12:14:03.585520    6142 main.go:141] libmachine: (multinode-675000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I1212 12:14:03.585533    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:61:fd:7b:ff:ad ID:1,d6:61:fd:7b:ff:ad Lease:0x6578bf04}
	I1212 12:14:03.585551    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:7e:4f:44:39:56:54 ID:1,7e:4f:44:39:56:54 Lease:0x6578bed7}
	I1212 12:14:03.585566    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:66:e9:56:a3:ac:b3 ID:1,66:e9:56:a3:ac:b3 Lease:0x6578beae}
	I1212 12:14:03.585575    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:44:78:68:b1:3a ID:1,3e:44:78:68:b1:3a Lease:0x657a0fe7}
	I1212 12:14:03.585584    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:f3:3:99:96:a ID:1,1e:f3:3:99:96:a Lease:0x657a0fab}
	I1212 12:14:03.585592    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:a6:c:94:a4:bb:23 ID:1,a6:c:94:a4:bb:23 Lease:0x657a0f0c}
	I1212 12:14:03.585650    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:52:61:6b:49:5:19 ID:1,52:61:6b:49:5:19 Lease:0x6578bd76}
	I1212 12:14:03.585664    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:4:40:f1:6c:89 ID:1,aa:4:40:f1:6c:89 Lease:0x657a0e0c}
	I1212 12:14:03.585686    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:32:8b:81:0:e0:a2 ID:1,32:8b:81:0:e0:a2 Lease:0x6578bc81}
	I1212 12:14:03.585695    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:e6:d:d2:81:4b ID:1,aa:e6:d:d2:81:4b Lease:0x657a0cd4}
	I1212 12:14:03.585701    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a0c44}
	I1212 12:14:05.130231    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:14:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1212 12:14:05.130248    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:14:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1212 12:14:05.130269    6142 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:14:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1212 12:14:05.586866    6142 main.go:141] libmachine: (multinode-675000) DBG | Attempt 3
	I1212 12:14:05.586881    6142 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:14:05.587013    6142 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6153
	I1212 12:14:05.587965    6142 main.go:141] libmachine: (multinode-675000) DBG | Searching for 6:ed:17:4f:83:b2 in /var/db/dhcpd_leases ...
	I1212 12:14:05.588007    6142 main.go:141] libmachine: (multinode-675000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I1212 12:14:05.588020    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:61:fd:7b:ff:ad ID:1,d6:61:fd:7b:ff:ad Lease:0x6578bf04}
	I1212 12:14:05.588033    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:7e:4f:44:39:56:54 ID:1,7e:4f:44:39:56:54 Lease:0x6578bed7}
	I1212 12:14:05.588042    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:66:e9:56:a3:ac:b3 ID:1,66:e9:56:a3:ac:b3 Lease:0x6578beae}
	I1212 12:14:05.588057    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:44:78:68:b1:3a ID:1,3e:44:78:68:b1:3a Lease:0x657a0fe7}
	I1212 12:14:05.588071    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:f3:3:99:96:a ID:1,1e:f3:3:99:96:a Lease:0x657a0fab}
	I1212 12:14:05.588079    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:a6:c:94:a4:bb:23 ID:1,a6:c:94:a4:bb:23 Lease:0x657a0f0c}
	I1212 12:14:05.588092    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:52:61:6b:49:5:19 ID:1,52:61:6b:49:5:19 Lease:0x6578bd76}
	I1212 12:14:05.588101    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:4:40:f1:6c:89 ID:1,aa:4:40:f1:6c:89 Lease:0x657a0e0c}
	I1212 12:14:05.588109    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:32:8b:81:0:e0:a2 ID:1,32:8b:81:0:e0:a2 Lease:0x6578bc81}
	I1212 12:14:05.588116    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:e6:d:d2:81:4b ID:1,aa:e6:d:d2:81:4b Lease:0x657a0cd4}
	I1212 12:14:05.588125    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a0c44}
	I1212 12:14:07.588616    6142 main.go:141] libmachine: (multinode-675000) DBG | Attempt 4
	I1212 12:14:07.588634    6142 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:14:07.588718    6142 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6153
	I1212 12:14:07.589641    6142 main.go:141] libmachine: (multinode-675000) DBG | Searching for 6:ed:17:4f:83:b2 in /var/db/dhcpd_leases ...
	I1212 12:14:07.589705    6142 main.go:141] libmachine: (multinode-675000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I1212 12:14:07.589737    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:61:fd:7b:ff:ad ID:1,d6:61:fd:7b:ff:ad Lease:0x6578bf04}
	I1212 12:14:07.589746    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:7e:4f:44:39:56:54 ID:1,7e:4f:44:39:56:54 Lease:0x6578bed7}
	I1212 12:14:07.589756    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:66:e9:56:a3:ac:b3 ID:1,66:e9:56:a3:ac:b3 Lease:0x6578beae}
	I1212 12:14:07.589770    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:44:78:68:b1:3a ID:1,3e:44:78:68:b1:3a Lease:0x657a0fe7}
	I1212 12:14:07.589780    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:f3:3:99:96:a ID:1,1e:f3:3:99:96:a Lease:0x657a0fab}
	I1212 12:14:07.589789    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:a6:c:94:a4:bb:23 ID:1,a6:c:94:a4:bb:23 Lease:0x657a0f0c}
	I1212 12:14:07.589800    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:52:61:6b:49:5:19 ID:1,52:61:6b:49:5:19 Lease:0x6578bd76}
	I1212 12:14:07.589809    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:4:40:f1:6c:89 ID:1,aa:4:40:f1:6c:89 Lease:0x657a0e0c}
	I1212 12:14:07.589818    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:32:8b:81:0:e0:a2 ID:1,32:8b:81:0:e0:a2 Lease:0x6578bc81}
	I1212 12:14:07.589827    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:e6:d:d2:81:4b ID:1,aa:e6:d:d2:81:4b Lease:0x657a0cd4}
	I1212 12:14:07.589837    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a0c44}
	I1212 12:14:09.590218    6142 main.go:141] libmachine: (multinode-675000) DBG | Attempt 5
	I1212 12:14:09.590249    6142 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:14:09.590362    6142 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6153
	I1212 12:14:09.591898    6142 main.go:141] libmachine: (multinode-675000) DBG | Searching for 6:ed:17:4f:83:b2 in /var/db/dhcpd_leases ...
	I1212 12:14:09.591964    6142 main.go:141] libmachine: (multinode-675000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I1212 12:14:09.591986    6142 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:6:ed:17:4f:83:b2 ID:1,6:ed:17:4f:83:b2 Lease:0x657a108f}
	I1212 12:14:09.592023    6142 main.go:141] libmachine: (multinode-675000) DBG | Found match: 6:ed:17:4f:83:b2
	I1212 12:14:09.592044    6142 main.go:141] libmachine: (multinode-675000) DBG | IP: 192.169.0.13
	I1212 12:14:09.592097    6142 main.go:141] libmachine: (multinode-675000) Calling .GetConfigRaw
	I1212 12:14:09.592894    6142 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:14:09.593049    6142 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:14:09.593173    6142 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 12:14:09.593187    6142 main.go:141] libmachine: (multinode-675000) Calling .GetState
	I1212 12:14:09.593352    6142 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:14:09.593395    6142 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6153
	I1212 12:14:09.594474    6142 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 12:14:09.594486    6142 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 12:14:09.594491    6142 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 12:14:09.594499    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:14:09.594599    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:14:09.594701    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:09.594804    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:09.594906    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:14:09.595047    6142 main.go:141] libmachine: Using SSH client type: native
	I1212 12:14:09.595343    6142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:14:09.595351    6142 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 12:14:09.656779    6142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 12:14:09.656794    6142 main.go:141] libmachine: Detecting the provisioner...
	I1212 12:14:09.656807    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:14:09.656957    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:14:09.657066    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:09.657164    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:09.657275    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:14:09.657403    6142 main.go:141] libmachine: Using SSH client type: native
	I1212 12:14:09.657807    6142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:14:09.657819    6142 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 12:14:09.720438    6142 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 12:14:09.720502    6142 main.go:141] libmachine: found compatible host: buildroot
	I1212 12:14:09.720509    6142 main.go:141] libmachine: Provisioning with buildroot...
	I1212 12:14:09.720522    6142 main.go:141] libmachine: (multinode-675000) Calling .GetMachineName
	I1212 12:14:09.720665    6142 buildroot.go:166] provisioning hostname "multinode-675000"
	I1212 12:14:09.720674    6142 main.go:141] libmachine: (multinode-675000) Calling .GetMachineName
	I1212 12:14:09.720769    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:14:09.720875    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:14:09.720962    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:09.721099    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:09.721199    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:14:09.721336    6142 main.go:141] libmachine: Using SSH client type: native
	I1212 12:14:09.721582    6142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:14:09.721591    6142 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-675000 && echo "multinode-675000" | sudo tee /etc/hostname
	I1212 12:14:09.792415    6142 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-675000
	
	I1212 12:14:09.792438    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:14:09.792584    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:14:09.792690    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:09.792792    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:09.792884    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:14:09.793018    6142 main.go:141] libmachine: Using SSH client type: native
	I1212 12:14:09.793270    6142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:14:09.793283    6142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-675000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-675000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-675000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 12:14:09.860190    6142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 12:14:09.860210    6142 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17734-1975/.minikube CaCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17734-1975/.minikube}
	I1212 12:14:09.860226    6142 buildroot.go:174] setting up certificates
	I1212 12:14:09.860237    6142 provision.go:83] configureAuth start
	I1212 12:14:09.860245    6142 main.go:141] libmachine: (multinode-675000) Calling .GetMachineName
	I1212 12:14:09.860380    6142 main.go:141] libmachine: (multinode-675000) Calling .GetIP
	I1212 12:14:09.860483    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:14:09.860569    6142 provision.go:138] copyHostCerts
	I1212 12:14:09.860596    6142 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem
	I1212 12:14:09.860640    6142 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem, removing ...
	I1212 12:14:09.860648    6142 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem
	I1212 12:14:09.860765    6142 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem (1078 bytes)
	I1212 12:14:09.860960    6142 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem
	I1212 12:14:09.860987    6142 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem, removing ...
	I1212 12:14:09.860992    6142 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem
	I1212 12:14:09.861073    6142 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem (1123 bytes)
	I1212 12:14:09.861223    6142 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem
	I1212 12:14:09.861259    6142 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem, removing ...
	I1212 12:14:09.861264    6142 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem
	I1212 12:14:09.861345    6142 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem (1675 bytes)
	I1212 12:14:09.861498    6142 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca-key.pem org=jenkins.multinode-675000 san=[192.169.0.13 192.169.0.13 localhost 127.0.0.1 minikube multinode-675000]
	I1212 12:14:09.919304    6142 provision.go:172] copyRemoteCerts
	I1212 12:14:09.919373    6142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 12:14:09.919392    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:14:09.919538    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:14:09.919642    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:09.919724    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:14:09.919794    6142 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:14:09.955795    6142 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 12:14:09.955867    6142 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 12:14:09.971667    6142 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 12:14:09.971727    6142 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 12:14:09.987158    6142 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 12:14:09.987209    6142 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 12:14:10.004963    6142 provision.go:86] duration metric: configureAuth took 144.718074ms
	I1212 12:14:10.004978    6142 buildroot.go:189] setting minikube options for container-runtime
	I1212 12:14:10.005121    6142 config.go:182] Loaded profile config "multinode-675000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:14:10.005136    6142 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:14:10.005341    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:14:10.005428    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:14:10.005576    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:10.005673    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:10.005777    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:14:10.005894    6142 main.go:141] libmachine: Using SSH client type: native
	I1212 12:14:10.006135    6142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:14:10.006144    6142 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 12:14:10.068319    6142 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 12:14:10.068332    6142 buildroot.go:70] root file system type: tmpfs
	I1212 12:14:10.068409    6142 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 12:14:10.068423    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:14:10.068560    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:14:10.068663    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:10.068754    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:10.068848    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:14:10.068978    6142 main.go:141] libmachine: Using SSH client type: native
	I1212 12:14:10.069224    6142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:14:10.069276    6142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 12:14:10.137571    6142 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 12:14:10.137591    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:14:10.137728    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:14:10.137825    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:10.137928    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:10.138033    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:14:10.138168    6142 main.go:141] libmachine: Using SSH client type: native
	I1212 12:14:10.138421    6142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:14:10.138437    6142 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 12:14:10.656154    6142 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 12:14:10.656170    6142 main.go:141] libmachine: Checking connection to Docker...
	I1212 12:14:10.656177    6142 main.go:141] libmachine: (multinode-675000) Calling .GetURL
	I1212 12:14:10.656336    6142 main.go:141] libmachine: Docker is up and running!
	I1212 12:14:10.656345    6142 main.go:141] libmachine: Reticulating splines...
	I1212 12:14:10.656354    6142 client.go:171] LocalClient.Create took 11.766805705s
	I1212 12:14:10.656368    6142 start.go:167] duration metric: libmachine.API.Create for "multinode-675000" took 11.766853198s
	I1212 12:14:10.656375    6142 start.go:300] post-start starting for "multinode-675000" (driver="hyperkit")
	I1212 12:14:10.656384    6142 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 12:14:10.656396    6142 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:14:10.656534    6142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 12:14:10.656551    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:14:10.656651    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:14:10.656756    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:10.656853    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:14:10.656942    6142 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:14:10.695291    6142 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 12:14:10.698199    6142 command_runner.go:130] > NAME=Buildroot
	I1212 12:14:10.698210    6142 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 12:14:10.698214    6142 command_runner.go:130] > ID=buildroot
	I1212 12:14:10.698218    6142 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 12:14:10.698222    6142 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 12:14:10.698343    6142 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 12:14:10.698371    6142 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17734-1975/.minikube/addons for local assets ...
	I1212 12:14:10.698464    6142 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17734-1975/.minikube/files for local assets ...
	I1212 12:14:10.698649    6142 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem -> 31982.pem in /etc/ssl/certs
	I1212 12:14:10.698655    6142 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem -> /etc/ssl/certs/31982.pem
	I1212 12:14:10.698863    6142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 12:14:10.705559    6142 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem --> /etc/ssl/certs/31982.pem (1708 bytes)
	I1212 12:14:10.721588    6142 start.go:303] post-start completed in 65.207257ms
	I1212 12:14:10.721619    6142 main.go:141] libmachine: (multinode-675000) Calling .GetConfigRaw
	I1212 12:14:10.722220    6142 main.go:141] libmachine: (multinode-675000) Calling .GetIP
	I1212 12:14:10.722376    6142 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/config.json ...
	I1212 12:14:10.722711    6142 start.go:128] duration metric: createHost completed in 11.888133933s
	I1212 12:14:10.722728    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:14:10.722813    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:14:10.722905    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:10.722974    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:10.723050    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:14:10.723160    6142 main.go:141] libmachine: Using SSH client type: native
	I1212 12:14:10.723395    6142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:14:10.723405    6142 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 12:14:10.783600    6142 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702412050.727095052
	
	I1212 12:14:10.783612    6142 fix.go:206] guest clock: 1702412050.727095052
	I1212 12:14:10.783617    6142 fix.go:219] Guest: 2023-12-12 12:14:10.727095052 -0800 PST Remote: 2023-12-12 12:14:10.72272 -0800 PST m=+12.432786248 (delta=4.375052ms)
	I1212 12:14:10.783640    6142 fix.go:190] guest clock delta is within tolerance: 4.375052ms
	I1212 12:14:10.783644    6142 start.go:83] releasing machines lock for "multinode-675000", held for 11.949232142s
	I1212 12:14:10.783663    6142 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:14:10.783795    6142 main.go:141] libmachine: (multinode-675000) Calling .GetIP
	I1212 12:14:10.783896    6142 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:14:10.784289    6142 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:14:10.784401    6142 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:14:10.784478    6142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 12:14:10.784515    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:14:10.784556    6142 ssh_runner.go:195] Run: cat /version.json
	I1212 12:14:10.784568    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:14:10.784605    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:14:10.784678    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:14:10.784684    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:10.784791    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:14:10.784813    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:14:10.784904    6142 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:14:10.784914    6142 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:14:10.785000    6142 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:14:10.818296    6142 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
	I1212 12:14:10.818645    6142 ssh_runner.go:195] Run: systemctl --version
	I1212 12:14:10.867258    6142 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 12:14:10.867361    6142 command_runner.go:130] > systemd 247 (247)
	I1212 12:14:10.867401    6142 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 12:14:10.867509    6142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 12:14:10.871469    6142 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 12:14:10.871594    6142 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 12:14:10.871643    6142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 12:14:10.881894    6142 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 12:14:10.881913    6142 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 12:14:10.881924    6142 start.go:475] detecting cgroup driver to use...
	I1212 12:14:10.882045    6142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 12:14:10.895216    6142 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 12:14:10.895559    6142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 12:14:10.902716    6142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 12:14:10.909786    6142 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 12:14:10.909835    6142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 12:14:10.916287    6142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 12:14:10.922687    6142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 12:14:10.929061    6142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 12:14:10.935496    6142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 12:14:10.942061    6142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 12:14:10.949831    6142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 12:14:10.956641    6142 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 12:14:10.956696    6142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 12:14:10.962757    6142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:14:11.051943    6142 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 12:14:11.064247    6142 start.go:475] detecting cgroup driver to use...
	I1212 12:14:11.064325    6142 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 12:14:11.076723    6142 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 12:14:11.076953    6142 command_runner.go:130] > [Unit]
	I1212 12:14:11.076962    6142 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 12:14:11.076967    6142 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 12:14:11.076979    6142 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 12:14:11.076985    6142 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 12:14:11.076992    6142 command_runner.go:130] > StartLimitBurst=3
	I1212 12:14:11.076996    6142 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 12:14:11.077005    6142 command_runner.go:130] > [Service]
	I1212 12:14:11.077008    6142 command_runner.go:130] > Type=notify
	I1212 12:14:11.077012    6142 command_runner.go:130] > Restart=on-failure
	I1212 12:14:11.077018    6142 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 12:14:11.077027    6142 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 12:14:11.077033    6142 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 12:14:11.077040    6142 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 12:14:11.077046    6142 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 12:14:11.077053    6142 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 12:14:11.077061    6142 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 12:14:11.077069    6142 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 12:14:11.077075    6142 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 12:14:11.077079    6142 command_runner.go:130] > ExecStart=
	I1212 12:14:11.077095    6142 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I1212 12:14:11.077100    6142 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 12:14:11.077107    6142 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 12:14:11.077113    6142 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 12:14:11.077117    6142 command_runner.go:130] > LimitNOFILE=infinity
	I1212 12:14:11.077121    6142 command_runner.go:130] > LimitNPROC=infinity
	I1212 12:14:11.077124    6142 command_runner.go:130] > LimitCORE=infinity
	I1212 12:14:11.077129    6142 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 12:14:11.077134    6142 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 12:14:11.077138    6142 command_runner.go:130] > TasksMax=infinity
	I1212 12:14:11.077142    6142 command_runner.go:130] > TimeoutStartSec=0
	I1212 12:14:11.077151    6142 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 12:14:11.077155    6142 command_runner.go:130] > Delegate=yes
	I1212 12:14:11.077163    6142 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 12:14:11.077167    6142 command_runner.go:130] > KillMode=process
	I1212 12:14:11.077171    6142 command_runner.go:130] > [Install]
	I1212 12:14:11.077180    6142 command_runner.go:130] > WantedBy=multi-user.target
	I1212 12:14:11.077500    6142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 12:14:11.089700    6142 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 12:14:11.105535    6142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 12:14:11.114175    6142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 12:14:11.122275    6142 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 12:14:11.163759    6142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 12:14:11.172906    6142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 12:14:11.184568    6142 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 12:14:11.184895    6142 ssh_runner.go:195] Run: which cri-dockerd
	I1212 12:14:11.187013    6142 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 12:14:11.187262    6142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 12:14:11.193691    6142 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 12:14:11.205613    6142 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 12:14:11.300448    6142 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 12:14:11.397786    6142 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 12:14:11.397858    6142 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 12:14:11.410613    6142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:14:11.509220    6142 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 12:14:12.850335    6142 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.341138393s)
	I1212 12:14:12.850387    6142 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 12:14:12.943246    6142 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 12:14:13.033585    6142 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 12:14:13.119338    6142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:14:13.205721    6142 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 12:14:13.215715    6142 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I1212 12:14:13.215831    6142 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1212 12:14:13.223968    6142 command_runner.go:130] > -- Journal begins at Tue 2023-12-12 20:14:07 UTC, ends at Tue 2023-12-12 20:14:13 UTC. --
	I1212 12:14:13.223984    6142 command_runner.go:130] > Dec 12 20:14:08 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1212 12:14:13.223990    6142 command_runner.go:130] > Dec 12 20:14:08 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1212 12:14:13.223995    6142 command_runner.go:130] > Dec 12 20:14:10 multinode-675000 systemd[1]: cri-docker.socket: Succeeded.
	I1212 12:14:13.224001    6142 command_runner.go:130] > Dec 12 20:14:10 multinode-675000 systemd[1]: Closed CRI Docker Socket for the API.
	I1212 12:14:13.224006    6142 command_runner.go:130] > Dec 12 20:14:10 multinode-675000 systemd[1]: Stopping CRI Docker Socket for the API.
	I1212 12:14:13.224024    6142 command_runner.go:130] > Dec 12 20:14:10 multinode-675000 systemd[1]: Starting CRI Docker Socket for the API.
	I1212 12:14:13.224031    6142 command_runner.go:130] > Dec 12 20:14:10 multinode-675000 systemd[1]: Listening on CRI Docker Socket for the API.
	I1212 12:14:13.224036    6142 command_runner.go:130] > Dec 12 20:14:13 multinode-675000 systemd[1]: cri-docker.socket: Succeeded.
	I1212 12:14:13.224041    6142 command_runner.go:130] > Dec 12 20:14:13 multinode-675000 systemd[1]: Closed CRI Docker Socket for the API.
	I1212 12:14:13.224047    6142 command_runner.go:130] > Dec 12 20:14:13 multinode-675000 systemd[1]: Stopping CRI Docker Socket for the API.
	I1212 12:14:13.224070    6142 command_runner.go:130] > Dec 12 20:14:13 multinode-675000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	I1212 12:14:13.224077    6142 command_runner.go:130] > Dec 12 20:14:13 multinode-675000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	I1212 12:14:13.257831    6142 out.go:177] 
	W1212 12:14:13.279942    6142 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 20:14:07 UTC, ends at Tue 2023-12-12 20:14:13 UTC. --
	Dec 12 20:14:08 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 20:14:08 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 20:14:10 multinode-675000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 20:14:10 multinode-675000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 20:14:10 multinode-675000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 20:14:10 multinode-675000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 20:14:10 multinode-675000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 20:14:13 multinode-675000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 20:14:13 multinode-675000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 20:14:13 multinode-675000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 20:14:13 multinode-675000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 12 20:14:13 multinode-675000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 20:14:07 UTC, ends at Tue 2023-12-12 20:14:13 UTC. --
	Dec 12 20:14:08 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 20:14:08 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 20:14:10 multinode-675000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 20:14:10 multinode-675000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 20:14:10 multinode-675000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 20:14:10 multinode-675000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 20:14:10 multinode-675000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 20:14:13 multinode-675000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 20:14:13 multinode-675000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 20:14:13 multinode-675000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 20:14:13 multinode-675000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 12 20:14:13 multinode-675000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1212 12:14:13.279972    6142 out.go:239] * 
	* 
	W1212 12:14:13.281131    6142 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 12:14:13.335690    6142 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:88: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-675000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000: exit status 6 (148.139835ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:14:13.532245    6167 status.go:415] kubeconfig endpoint: extract IP: "multinode-675000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-675000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (15.26s)
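The root cause is the same RUNTIME_ENABLE failure seen elsewhere in this run: systemd rejects `sudo systemctl restart cri-docker.socket` because the socket's service, cri-docker.service, is already active ("cri-docker.socket: Socket service cri-docker.service already active, refusing."). The ordering systemd does accept is sketched below; this is an illustration of the systemd constraint, assuming shell access to the guest VM, not the sequence minikube itself runs.

	# Sketch only: a socket unit can be (re)started once its service is stopped.
	sudo systemctl stop cri-docker.service
	sudo systemctl restart cri-docker.socket
	sudo systemctl start cri-docker.service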

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (119.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:509: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (93.18337ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-675000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:511: failed to create busybox deployment to multinode cluster
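The FreshStart2Nodes failure above means no kubeconfig entry was ever written for this profile (the status check at 12:14:13 already reported that "multinode-675000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig), so every kubectl invocation in this test fails before any API request is made. A hypothetical check against that same kubeconfig, not something the test runs, would confirm the missing cluster entry:

	# Hypothetical check, not part of the test run:
	KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig \
	  kubectl config get-clusters
	# "multinode-675000" would be absent, consistent with the errors above and below.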
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- rollout status deployment/busybox: exit status 1 (93.415612ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:516: failed to deploy busybox to multinode cluster
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (93.515627ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.517475ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.594364ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E1212 12:14:17.012043    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
E1212 12:14:17.018515    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
E1212 12:14:17.029083    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
E1212 12:14:17.051288    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
E1212 12:14:17.091738    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
E1212 12:14:17.173230    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
E1212 12:14:17.334569    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
E1212 12:14:17.655429    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (93.020455ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E1212 12:14:18.297692    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
E1212 12:14:19.577945    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.916891ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E1212 12:14:22.140131    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.886693ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E1212 12:14:27.260230    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
E1212 12:14:37.500966    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.351862ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.674079ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E1212 12:14:57.980974    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.024037ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E1212 12:15:17.103884    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 12:15:19.607101    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:15:38.940550    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.665819ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E1212 12:15:47.294625    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.050189ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:540: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:544: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (93.120212ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:546: failed get Pod names
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- exec  -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- exec  -- nslookup kubernetes.io: exit status 1 (93.255462ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:554: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- exec  -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- exec  -- nslookup kubernetes.default: exit status 1 (94.349528ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:564: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (95.693962ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:572: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000: exit status 6 (148.070906ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:16:13.284935    6340 status.go:415] kubeconfig endpoint: extract IP: "multinode-675000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-675000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (119.66s)
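The repeated `error: no server found for cluster "multinode-675000"` failures above match the post-mortem warning that the profile no longer appears in /Users/jenkins/minikube-integration/17734-1975/kubeconfig. A minimal manual check along the same lines (hypothetical commands, not part of the recorded run, assuming the same profile name and kubeconfig path):

	# Is the profile's context still present in the kubeconfig the tests point at?
	KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig kubectl config get-contexts multinode-675000
	# The status output above suggests re-syncing the endpoint if it is missing:
	out/minikube-darwin-amd64 -p multinode-675000 update-context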

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:580: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-675000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (93.971503ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:582: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000: exit status 6 (150.54109ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:16:13.529654    6348 status.go:415] kubeconfig endpoint: extract IP: "multinode-675000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-675000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.24s)

                                                
                                    
TestMultiNode/serial/AddNode (0.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-675000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-675000 -v 3 --alsologtostderr: exit status 119 (236.901682ms)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-675000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 12:16:13.594334    6353 out.go:296] Setting OutFile to fd 1 ...
	I1212 12:16:13.594615    6353 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:16:13.594621    6353 out.go:309] Setting ErrFile to fd 2...
	I1212 12:16:13.594625    6353 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:16:13.594829    6353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	I1212 12:16:13.595154    6353 mustload.go:65] Loading cluster: multinode-675000
	I1212 12:16:13.595445    6353 config.go:182] Loaded profile config "multinode-675000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:16:13.595853    6353 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:16:13.595898    6353 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:16:13.603778    6353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51241
	I1212 12:16:13.604171    6353 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:16:13.604606    6353 main.go:141] libmachine: Using API Version  1
	I1212 12:16:13.604634    6353 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:16:13.604878    6353 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:16:13.604988    6353 main.go:141] libmachine: (multinode-675000) Calling .GetState
	I1212 12:16:13.605084    6353 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:16:13.605131    6353 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6153
	I1212 12:16:13.606180    6353 host.go:66] Checking if "multinode-675000" exists ...
	I1212 12:16:13.606414    6353 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:16:13.606436    6353 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:16:13.614235    6353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51243
	I1212 12:16:13.614588    6353 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:16:13.614938    6353 main.go:141] libmachine: Using API Version  1
	I1212 12:16:13.614957    6353 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:16:13.615150    6353 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:16:13.615254    6353 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:16:13.615347    6353 api_server.go:166] Checking apiserver status ...
	I1212 12:16:13.615406    6353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 12:16:13.615429    6353 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:16:13.615514    6353 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:16:13.615594    6353 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:13.615668    6353 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:16:13.615748    6353 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	W1212 12:16:13.652311    6353 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:16:13.709428    6353 out.go:177] * This control plane is not running! (state=Stopped)
	W1212 12:16:13.732185    6353 out.go:239] ! This is unusual - you may want to investigate using "minikube logs -p multinode-675000"
	! This is unusual - you may want to investigate using "minikube logs -p multinode-675000"
	I1212 12:16:13.753183    6353 out.go:177]   To start a cluster, run: "minikube start -p multinode-675000"

                                                
                                                
** /stderr **
multinode_test.go:113: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-675000 -v 3 --alsologtostderr" : exit status 119
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000: exit status 6 (145.051354ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:16:13.912337    6357 status.go:415] kubeconfig endpoint: extract IP: "multinode-675000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-675000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/AddNode (0.38s)
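`node add` exits 119 because the apiserver probe logged above (sudo pgrep -xnf kube-apiserver.*minikube.*) finds no matching process on the VM. A hedged way to reproduce that probe by hand (hypothetical, not part of the recorded run, assuming the VM is still reachable over SSH):

	# Run the same apiserver check the command logged, through the profile's SSH session:
	out/minikube-darwin-amd64 -p multinode-675000 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# A non-zero exit here corresponds to the "This control plane is not running!" message above.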

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-675000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:211: (dbg) Non-zero exit: kubectl --context multinode-675000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (36.873911ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-675000

                                                
                                                
** /stderr **
multinode_test.go:213: failed to 'kubectl get nodes' with args "kubectl --context multinode-675000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:220: failed to decode json from label list: args "kubectl --context multinode-675000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000: exit status 6 (144.555927ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:16:14.094080    6363 status.go:415] kubeconfig endpoint: extract IP: "multinode-675000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-675000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:156: expected profile "multinode-675000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-675000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-675000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMH
idden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-675000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":
\"\",\"IP\":\"192.169.0.13\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000,\"GPUs\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000: exit status 6 (148.048769ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:16:14.424008    6375 status.go:415] kubeconfig endpoint: extract IP: "multinode-675000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-675000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/ProfileList (0.33s)
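The assertion inspects the node count embedded in the `profile list --output json` blob above (3 nodes expected, 1 entry found under Config.Nodes). One way to pull just that count out of the same JSON (a sketch, assuming jq is available on the build host):

	# Report the Nodes length per valid profile, which is what the test compares against 3:
	out/minikube-darwin-amd64 profile list --output json | jq '.valid[] | {name: .Name, nodes: (.Config.Nodes | length)}'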

                                                
                                    
TestMultiNode/serial/CopyFile (0.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-675000 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-675000 status --output json --alsologtostderr: exit status 6 (149.101638ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-675000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 12:16:14.490632    6380 out.go:296] Setting OutFile to fd 1 ...
	I1212 12:16:14.490932    6380 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:16:14.490939    6380 out.go:309] Setting ErrFile to fd 2...
	I1212 12:16:14.490943    6380 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:16:14.491137    6380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	I1212 12:16:14.491329    6380 out.go:303] Setting JSON to true
	I1212 12:16:14.491352    6380 mustload.go:65] Loading cluster: multinode-675000
	I1212 12:16:14.491392    6380 notify.go:220] Checking for updates...
	I1212 12:16:14.491689    6380 config.go:182] Loaded profile config "multinode-675000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:16:14.491701    6380 status.go:255] checking status of multinode-675000 ...
	I1212 12:16:14.492148    6380 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:16:14.492235    6380 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:16:14.500843    6380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51275
	I1212 12:16:14.501347    6380 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:16:14.501823    6380 main.go:141] libmachine: Using API Version  1
	I1212 12:16:14.501857    6380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:16:14.502057    6380 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:16:14.502152    6380 main.go:141] libmachine: (multinode-675000) Calling .GetState
	I1212 12:16:14.502270    6380 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:16:14.502351    6380 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6153
	I1212 12:16:14.503438    6380 status.go:330] multinode-675000 host status = "Running" (err=<nil>)
	I1212 12:16:14.503454    6380 host.go:66] Checking if "multinode-675000" exists ...
	I1212 12:16:14.503685    6380 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:16:14.503708    6380 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:16:14.512026    6380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51277
	I1212 12:16:14.512366    6380 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:16:14.512701    6380 main.go:141] libmachine: Using API Version  1
	I1212 12:16:14.512719    6380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:16:14.512958    6380 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:16:14.513067    6380 main.go:141] libmachine: (multinode-675000) Calling .GetIP
	I1212 12:16:14.513154    6380 host.go:66] Checking if "multinode-675000" exists ...
	I1212 12:16:14.513389    6380 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:16:14.513424    6380 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:16:14.524654    6380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51279
	I1212 12:16:14.525089    6380 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:16:14.525500    6380 main.go:141] libmachine: Using API Version  1
	I1212 12:16:14.525511    6380 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:16:14.525738    6380 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:16:14.525863    6380 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:16:14.526014    6380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 12:16:14.526038    6380 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:16:14.526130    6380 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:16:14.526206    6380 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:14.526276    6380 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:16:14.526362    6380 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:16:14.560601    6380 ssh_runner.go:195] Run: systemctl --version
	I1212 12:16:14.564286    6380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E1212 12:16:14.573608    6380 status.go:415] kubeconfig endpoint: extract IP: "multinode-675000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:16:14.573633    6380 api_server.go:166] Checking apiserver status ...
	I1212 12:16:14.573698    6380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:16:14.581689    6380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:16:14.581701    6380 status.go:421] multinode-675000 apiserver status = Stopped (err=<nil>)
	I1212 12:16:14.581710    6380 status.go:257] multinode-675000 status: &{Name:multinode-675000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:176: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-675000 status --output json --alsologtostderr" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000: exit status 6 (144.762905ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:16:14.719102    6385 status.go:415] kubeconfig endpoint: extract IP: "multinode-675000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-675000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/CopyFile (0.29s)

                                                
                                    
TestMultiNode/serial/StopNode (0.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-675000 node stop m03
multinode_test.go:238: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-675000 node stop m03: exit status 85 (146.621813ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:240: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-675000 node stop m03": exit status 85
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-675000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-675000 status: exit status 6 (146.05865ms)

                                                
                                                
-- stdout --
	multinode-675000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:16:15.011781    6392 status.go:415] kubeconfig endpoint: extract IP: "multinode-675000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
multinode_test.go:247: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-675000 status" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000: exit status 6 (147.877397ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:16:15.159594    6397 status.go:415] kubeconfig endpoint: extract IP: "multinode-675000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-675000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/StopNode (0.44s)
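Both this `node stop m03` and the `node start m03` attempt that follows fail with GUEST_NODE_RETRIEVE because the profile only tracks a single node at this point, so m03 does not exist. A quick hedged check (the same `node list` call the Audit table further down records):

	# List the nodes minikube still tracks for the profile; m03 is expected to be absent:
	out/minikube-darwin-amd64 node list -p multinode-675000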

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-675000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-675000 node start m03 --alsologtostderr: exit status 85 (150.213443ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 12:16:15.226074    6402 out.go:296] Setting OutFile to fd 1 ...
	I1212 12:16:15.226471    6402 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:16:15.226477    6402 out.go:309] Setting ErrFile to fd 2...
	I1212 12:16:15.226481    6402 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:16:15.226655    6402 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	I1212 12:16:15.227011    6402 mustload.go:65] Loading cluster: multinode-675000
	I1212 12:16:15.227356    6402 config.go:182] Loaded profile config "multinode-675000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:16:15.249494    6402 out.go:177] 
	W1212 12:16:15.271219    6402 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1212 12:16:15.271244    6402 out.go:239] * 
	* 
	W1212 12:16:15.275008    6402 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 12:16:15.296150    6402 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1212 12:16:15.226074    6402 out.go:296] Setting OutFile to fd 1 ...
I1212 12:16:15.226471    6402 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 12:16:15.226477    6402 out.go:309] Setting ErrFile to fd 2...
I1212 12:16:15.226481    6402 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 12:16:15.226655    6402 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
I1212 12:16:15.227011    6402 mustload.go:65] Loading cluster: multinode-675000
I1212 12:16:15.227356    6402 config.go:182] Loaded profile config "multinode-675000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 12:16:15.249494    6402 out.go:177] 
W1212 12:16:15.271219    6402 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1212 12:16:15.271244    6402 out.go:239] * 
* 
W1212 12:16:15.275008    6402 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1212 12:16:15.296150    6402 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-675000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-675000 status
multinode_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-675000 status: exit status 6 (145.199567ms)

                                                
                                                
-- stdout --
	multinode-675000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:16:15.455993    6404 status.go:415] kubeconfig endpoint: extract IP: "multinode-675000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
multinode_test.go:291: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-675000 status" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000: exit status 6 (145.70715ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 12:16:15.601815    6409 status.go:415] kubeconfig endpoint: extract IP: "multinode-675000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-675000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.44s)

                                                
                                    
TestMultiNode/serial/DeleteNode (3.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-675000 node delete m03
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-675000 node delete m03: exit status 80 (252.992943ms)

                                                
                                                
-- stdout --
	* Deleting node m03 from cluster multinode-675000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_DELETE: deleting node: retrieve: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:424: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-675000 node delete m03": exit status 80
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-675000 status --alsologtostderr
multinode_test.go:434: status says both hosts are not running: args "out/minikube-darwin-amd64 -p multinode-675000 status --alsologtostderr": multinode-675000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
multinode_test.go:438: status says both kubelets are not running: args "out/minikube-darwin-amd64 -p multinode-675000 status --alsologtostderr": multinode-675000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
multinode_test.go:465: expected 2 nodes Ready status to be True, got 
-- stdout --
	' True
	'

                                                
                                                
-- /stdout --
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-675000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-675000 logs -n 25: (2.501294545s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-675000 -- rollout       | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | status deployment/busybox            |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:15 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:15 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- exec          | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | -- nslookup kubernetes.io            |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- exec          | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | -- nslookup kubernetes.default       |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000                  | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | -- exec  -- nslookup                 |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| node    | add -p multinode-675000 -v 3         | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | multinode-675000 node stop m03       | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	| node    | multinode-675000 node start          | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | m03 --alsologtostderr                |                  |         |         |                     |                     |
	| node    | list -p multinode-675000             | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	| stop    | -p multinode-675000                  | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST | 12 Dec 23 12:16 PST |
	| start   | -p multinode-675000                  | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST | 12 Dec 23 12:17 PST |
	|         | --wait=true -v=8                     |                  |         |         |                     |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | list -p multinode-675000             | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:17 PST |                     |
	| node    | multinode-675000 node delete         | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:17 PST |                     |
	|         | m03                                  |                  |         |         |                     |                     |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 12:16:17
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 12:16:17.958960    6422 out.go:296] Setting OutFile to fd 1 ...
	I1212 12:16:17.959174    6422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:16:17.959178    6422 out.go:309] Setting ErrFile to fd 2...
	I1212 12:16:17.959182    6422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:16:17.959369    6422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	I1212 12:16:17.960846    6422 out.go:303] Setting JSON to false
	I1212 12:16:17.983797    6422 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2748,"bootTime":1702409429,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 12:16:17.983907    6422 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 12:16:18.006425    6422 out.go:177] * [multinode-675000] minikube v1.32.0 on Darwin 14.2
	I1212 12:16:18.069981    6422 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 12:16:18.048927    6422 notify.go:220] Checking for updates...
	I1212 12:16:18.111644    6422 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:16:18.153752    6422 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 12:16:18.195851    6422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 12:16:18.237606    6422 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	I1212 12:16:18.279674    6422 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 12:16:18.301412    6422 config.go:182] Loaded profile config "multinode-675000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:16:18.301581    6422 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 12:16:18.302257    6422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:16:18.302338    6422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:16:18.311688    6422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51325
	I1212 12:16:18.312164    6422 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:16:18.312590    6422 main.go:141] libmachine: Using API Version  1
	I1212 12:16:18.312603    6422 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:16:18.312818    6422 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:16:18.312927    6422 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:16:18.341981    6422 out.go:177] * Using the hyperkit driver based on existing profile
	I1212 12:16:18.383557    6422 start.go:298] selected driver: hyperkit
	I1212 12:16:18.383578    6422 start.go:902] validating driver "hyperkit" against &{Name:multinode-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-675000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 12:16:18.383708    6422 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 12:16:18.383875    6422 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 12:16:18.384029    6422 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17734-1975/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 12:16:18.392294    6422 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 12:16:18.396229    6422 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:16:18.396266    6422 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 12:16:18.399090    6422 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 12:16:18.399168    6422 cni.go:84] Creating CNI manager for ""
	I1212 12:16:18.399177    6422 cni.go:136] 1 nodes found, recommending kindnet
	I1212 12:16:18.399186    6422 start_flags.go:323] config:
	{Name:multinode-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-675000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 12:16:18.399371    6422 iso.go:125] acquiring lock: {Name:mkd640d41cda61c79a7d2c2e38355d745b556a2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 12:16:18.420844    6422 out.go:177] * Starting control plane node multinode-675000 in cluster multinode-675000
	I1212 12:16:18.441902    6422 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 12:16:18.442003    6422 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 12:16:18.442027    6422 cache.go:56] Caching tarball of preloaded images
	I1212 12:16:18.442197    6422 preload.go:174] Found /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 12:16:18.442219    6422 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 12:16:18.442380    6422 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/config.json ...
	I1212 12:16:18.443216    6422 start.go:365] acquiring machines lock for multinode-675000: {Name:mkcfb9a2794178bbcff953e64f7f6a3e3b1e9997 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 12:16:18.443344    6422 start.go:369] acquired machines lock for "multinode-675000" in 103.607µs
	I1212 12:16:18.443402    6422 start.go:96] Skipping create...Using existing machine configuration
	I1212 12:16:18.443413    6422 fix.go:54] fixHost starting: 
	I1212 12:16:18.443787    6422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:16:18.443813    6422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:16:18.453103    6422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51327
	I1212 12:16:18.453467    6422 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:16:18.453847    6422 main.go:141] libmachine: Using API Version  1
	I1212 12:16:18.453862    6422 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:16:18.454063    6422 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:16:18.454166    6422 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:16:18.454272    6422 main.go:141] libmachine: (multinode-675000) Calling .GetState
	I1212 12:16:18.454354    6422 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:16:18.454423    6422 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6153
	I1212 12:16:18.455504    6422 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid 6153 missing from process table
	I1212 12:16:18.455556    6422 fix.go:102] recreateIfNeeded on multinode-675000: state=Stopped err=<nil>
	I1212 12:16:18.455589    6422 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	W1212 12:16:18.455685    6422 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 12:16:18.476794    6422 out.go:177] * Restarting existing hyperkit VM for "multinode-675000" ...
	I1212 12:16:18.497748    6422 main.go:141] libmachine: (multinode-675000) Calling .Start
	I1212 12:16:18.498099    6422 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:16:18.498150    6422 main.go:141] libmachine: (multinode-675000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/hyperkit.pid
	I1212 12:16:18.498198    6422 main.go:141] libmachine: (multinode-675000) DBG | Using UUID fbe44634-992a-11ee-b1fb-f01898ef957c
	I1212 12:16:18.629964    6422 main.go:141] libmachine: (multinode-675000) DBG | Generated MAC 6:ed:17:4f:83:b2
	I1212 12:16:18.630016    6422 main.go:141] libmachine: (multinode-675000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-675000
	I1212 12:16:18.630184    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fbe44634-992a-11ee-b1fb-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00009fe30)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I1212 12:16:18.630218    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fbe44634-992a-11ee-b1fb-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00009fe30)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I1212 12:16:18.630309    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fbe44634-992a-11ee-b1fb-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/multinode-675000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/tty,log=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/bzimage,/Users/jenkins/minikube-integration/1773
4-1975/.minikube/machines/multinode-675000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-675000"}
	I1212 12:16:18.630363    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fbe44634-992a-11ee-b1fb-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/multinode-675000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/tty,log=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/console-ring -f kexec,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/bzimage,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/initrd,earlyprintk=
serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-675000"
	I1212 12:16:18.630380    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1212 12:16:18.631979    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:18 DEBUG: hyperkit: Pid is 6434
	I1212 12:16:18.632386    6422 main.go:141] libmachine: (multinode-675000) DBG | Attempt 0
	I1212 12:16:18.632402    6422 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:16:18.632552    6422 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6434
	I1212 12:16:18.635109    6422 main.go:141] libmachine: (multinode-675000) DBG | Searching for 6:ed:17:4f:83:b2 in /var/db/dhcpd_leases ...
	I1212 12:16:18.635147    6422 main.go:141] libmachine: (multinode-675000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I1212 12:16:18.635166    6422 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:6:ed:17:4f:83:b2 ID:1,6:ed:17:4f:83:b2 Lease:0x6578bf90}
	I1212 12:16:18.635181    6422 main.go:141] libmachine: (multinode-675000) DBG | Found match: 6:ed:17:4f:83:b2
	I1212 12:16:18.635208    6422 main.go:141] libmachine: (multinode-675000) DBG | IP: 192.169.0.13
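	The driver recovers the VM's address by matching the MAC it just generated against the macOS host's DHCP lease file. The same lookup can be repeated by hand on the host (MAC taken from the log lines above; BSD grep syntax):
	    $ grep -B 1 -A 3 '6:ed:17:4f:83:b2' /var/db/dhcpd_leases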
	I1212 12:16:18.635263    6422 main.go:141] libmachine: (multinode-675000) Calling .GetConfigRaw
	I1212 12:16:18.635890    6422 main.go:141] libmachine: (multinode-675000) Calling .GetIP
	I1212 12:16:18.636102    6422 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/config.json ...
	I1212 12:16:18.636471    6422 machine.go:88] provisioning docker machine ...
	I1212 12:16:18.636488    6422 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:16:18.636668    6422 main.go:141] libmachine: (multinode-675000) Calling .GetMachineName
	I1212 12:16:18.636826    6422 buildroot.go:166] provisioning hostname "multinode-675000"
	I1212 12:16:18.636842    6422 main.go:141] libmachine: (multinode-675000) Calling .GetMachineName
	I1212 12:16:18.636989    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:16:18.637154    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:16:18.637343    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:18.637484    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:18.637625    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:16:18.637803    6422 main.go:141] libmachine: Using SSH client type: native
	I1212 12:16:18.638418    6422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:16:18.638429    6422 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-675000 && echo "multinode-675000" | sudo tee /etc/hostname
	I1212 12:16:18.640829    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1212 12:16:18.767206    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1212 12:16:18.767977    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 12:16:18.768017    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 12:16:18.768029    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 12:16:18.768039    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 12:16:19.139682    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1212 12:16:19.139734    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1212 12:16:19.243792    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 12:16:19.243812    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 12:16:19.243822    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 12:16:19.243838    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 12:16:19.244684    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1212 12:16:19.244697    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1212 12:16:24.226923    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:24 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1212 12:16:24.227078    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:24 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1212 12:16:24.227100    6422 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:16:24 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1212 12:16:29.728394    6422 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-675000
	
	I1212 12:16:29.728414    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:16:29.728720    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:16:29.728888    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:29.729080    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:29.729195    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:16:29.729436    6422 main.go:141] libmachine: Using SSH client type: native
	I1212 12:16:29.729766    6422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:16:29.729778    6422 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-675000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-675000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-675000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 12:16:29.809143    6422 main.go:141] libmachine: SSH cmd err, output: <nil>: 
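	The hostname and /etc/hosts fix-up above completed without error (empty output, nil err). For illustration, the result could be confirmed from the host with the same profile, assuming the minikube binary used for this run is on PATH:
	    $ minikube -p multinode-675000 ssh -- "hostname && grep multinode-675000 /etc/hosts"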
	I1212 12:16:29.809162    6422 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17734-1975/.minikube CaCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17734-1975/.minikube}
	I1212 12:16:29.809174    6422 buildroot.go:174] setting up certificates
	I1212 12:16:29.809186    6422 provision.go:83] configureAuth start
	I1212 12:16:29.809193    6422 main.go:141] libmachine: (multinode-675000) Calling .GetMachineName
	I1212 12:16:29.809329    6422 main.go:141] libmachine: (multinode-675000) Calling .GetIP
	I1212 12:16:29.809419    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:16:29.809510    6422 provision.go:138] copyHostCerts
	I1212 12:16:29.809539    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem
	I1212 12:16:29.809583    6422 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem, removing ...
	I1212 12:16:29.809591    6422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem
	I1212 12:16:29.809721    6422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem (1078 bytes)
	I1212 12:16:29.809950    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem
	I1212 12:16:29.809978    6422 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem, removing ...
	I1212 12:16:29.809982    6422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem
	I1212 12:16:29.810049    6422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem (1123 bytes)
	I1212 12:16:29.810203    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem
	I1212 12:16:29.810233    6422 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem, removing ...
	I1212 12:16:29.810240    6422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem
	I1212 12:16:29.810306    6422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem (1675 bytes)
	I1212 12:16:29.810465    6422 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca-key.pem org=jenkins.multinode-675000 san=[192.169.0.13 192.169.0.13 localhost 127.0.0.1 minikube multinode-675000]
	I1212 12:16:29.973053    6422 provision.go:172] copyRemoteCerts
	I1212 12:16:29.973109    6422 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 12:16:29.973127    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:16:29.973265    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:16:29.973358    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:29.973436    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:16:29.973523    6422 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:16:30.016350    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 12:16:30.016417    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 12:16:30.032607    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 12:16:30.032698    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 12:16:30.049317    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 12:16:30.049378    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 12:16:30.065188    6422 provision.go:86] duration metric: configureAuth took 255.989698ms
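	configureAuth regenerated a server certificate with the SANs listed at 12:16:29.810465 and copied it into /etc/docker on the guest. The host-side copy can be inspected with stock openssl to confirm its subject, validity window and chain (paths taken from the log above):
	    $ openssl x509 -in /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem -noout -subject -dates
	    $ openssl verify -CAfile /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem \
	        /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem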
	I1212 12:16:30.065200    6422 buildroot.go:189] setting minikube options for container-runtime
	I1212 12:16:30.065322    6422 config.go:182] Loaded profile config "multinode-675000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:16:30.065337    6422 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:16:30.065472    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:16:30.065564    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:16:30.065650    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:30.065747    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:30.065841    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:16:30.065956    6422 main.go:141] libmachine: Using SSH client type: native
	I1212 12:16:30.066197    6422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:16:30.066206    6422 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 12:16:30.139477    6422 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 12:16:30.139490    6422 buildroot.go:70] root file system type: tmpfs
	I1212 12:16:30.139657    6422 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 12:16:30.139697    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:16:30.139970    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:16:30.140143    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:30.140290    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:30.140432    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:16:30.140562    6422 main.go:141] libmachine: Using SSH client type: native
	I1212 12:16:30.140886    6422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:16:30.140952    6422 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 12:16:30.227855    6422 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 12:16:30.227885    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:16:30.228098    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:16:30.228209    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:30.228314    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:30.228400    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:16:30.228529    6422 main.go:141] libmachine: Using SSH client type: native
	I1212 12:16:30.228844    6422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:16:30.228857    6422 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 12:16:30.782295    6422 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
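	diff reports the target unit missing, so the || branch moves the freshly written unit into place, enables it and restarts docker, producing the "Created symlink" line above. Once the SSH session is back, the result can be double-checked from inside the VM with standard systemctl queries:
	    $ sudo systemctl is-enabled docker
	    $ sudo systemctl is-active docker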
	
	I1212 12:16:30.782316    6422 machine.go:91] provisioned docker machine in 12.146012107s
	I1212 12:16:30.782329    6422 start.go:300] post-start starting for "multinode-675000" (driver="hyperkit")
	I1212 12:16:30.782342    6422 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 12:16:30.782358    6422 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:16:30.782626    6422 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 12:16:30.782677    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:16:30.782790    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:16:30.782969    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:30.783154    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:16:30.783322    6422 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:16:30.827121    6422 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 12:16:30.830635    6422 command_runner.go:130] > NAME=Buildroot
	I1212 12:16:30.830650    6422 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 12:16:30.830659    6422 command_runner.go:130] > ID=buildroot
	I1212 12:16:30.830675    6422 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 12:16:30.830680    6422 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 12:16:30.830711    6422 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 12:16:30.830721    6422 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17734-1975/.minikube/addons for local assets ...
	I1212 12:16:30.830788    6422 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17734-1975/.minikube/files for local assets ...
	I1212 12:16:30.830927    6422 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem -> 31982.pem in /etc/ssl/certs
	I1212 12:16:30.830933    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem -> /etc/ssl/certs/31982.pem
	I1212 12:16:30.831092    6422 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 12:16:30.837636    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem --> /etc/ssl/certs/31982.pem (1708 bytes)
	I1212 12:16:30.854781    6422 start.go:303] post-start completed in 72.441616ms
	I1212 12:16:30.854797    6422 fix.go:56] fixHost completed within 12.411568764s
	I1212 12:16:30.854811    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:16:30.854944    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:16:30.855041    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:30.855130    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:30.855219    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:16:30.855339    6422 main.go:141] libmachine: Using SSH client type: native
	I1212 12:16:30.855574    6422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:16:30.855582    6422 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 12:16:30.929180    6422 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702412190.834980838
	
	I1212 12:16:30.929192    6422 fix.go:206] guest clock: 1702412190.834980838
	I1212 12:16:30.929197    6422 fix.go:219] Guest: 2023-12-12 12:16:30.834980838 -0800 PST Remote: 2023-12-12 12:16:30.8548 -0800 PST m=+12.940186645 (delta=-19.819162ms)
	I1212 12:16:30.929220    6422 fix.go:190] guest clock delta is within tolerance: -19.819162ms
	I1212 12:16:30.929224    6422 start.go:83] releasing machines lock for "multinode-675000", held for 12.486054192s
	I1212 12:16:30.929242    6422 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:16:30.929379    6422 main.go:141] libmachine: (multinode-675000) Calling .GetIP
	I1212 12:16:30.929490    6422 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:16:30.929842    6422 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:16:30.929950    6422 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:16:30.930008    6422 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 12:16:30.930043    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:16:30.930093    6422 ssh_runner.go:195] Run: cat /version.json
	I1212 12:16:30.930107    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:16:30.930161    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:16:30.930186    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:16:30.930281    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:30.930293    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:16:30.930380    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:16:30.930403    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:16:30.930474    6422 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:16:30.930521    6422 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:16:31.016914    6422 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 12:16:31.018009    6422 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
	I1212 12:16:31.018181    6422 ssh_runner.go:195] Run: systemctl --version
	I1212 12:16:31.022217    6422 command_runner.go:130] > systemd 247 (247)
	I1212 12:16:31.022236    6422 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 12:16:31.022420    6422 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 12:16:31.026092    6422 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 12:16:31.026121    6422 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 12:16:31.026169    6422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 12:16:31.037180    6422 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 12:16:31.037207    6422 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
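	The find invocation at 12:16:31.026169 is mangled the same way (%!p(MISSING) was a literal %p passed to find's -printf). Re-quoted for an interactive shell, the command that produced the output above is roughly:
	    $ sudo find /etc/cni/net.d -maxdepth 1 -type f \
	        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	        -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
	which renames any bridge/podman CNI configs out of the way (the log at 12:16:18.399177 recommends kindnet for this profile).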
	I1212 12:16:31.037215    6422 start.go:475] detecting cgroup driver to use...
	I1212 12:16:31.037322    6422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 12:16:31.050343    6422 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 12:16:31.050715    6422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 12:16:31.057229    6422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 12:16:31.063683    6422 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 12:16:31.063725    6422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 12:16:31.070190    6422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 12:16:31.076976    6422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 12:16:31.083811    6422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 12:16:31.090723    6422 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 12:16:31.097726    6422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 12:16:31.104327    6422 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 12:16:31.110089    6422 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 12:16:31.110193    6422 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 12:16:31.116005    6422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:16:31.201329    6422 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 12:16:31.213913    6422 start.go:475] detecting cgroup driver to use...
	I1212 12:16:31.213984    6422 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 12:16:31.224398    6422 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 12:16:31.224985    6422 command_runner.go:130] > [Unit]
	I1212 12:16:31.224994    6422 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 12:16:31.224999    6422 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 12:16:31.225003    6422 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 12:16:31.225007    6422 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 12:16:31.225011    6422 command_runner.go:130] > StartLimitBurst=3
	I1212 12:16:31.225015    6422 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 12:16:31.225019    6422 command_runner.go:130] > [Service]
	I1212 12:16:31.225022    6422 command_runner.go:130] > Type=notify
	I1212 12:16:31.225026    6422 command_runner.go:130] > Restart=on-failure
	I1212 12:16:31.225033    6422 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 12:16:31.225041    6422 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 12:16:31.225047    6422 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 12:16:31.225052    6422 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 12:16:31.225059    6422 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 12:16:31.225064    6422 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 12:16:31.225070    6422 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 12:16:31.225078    6422 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 12:16:31.225084    6422 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 12:16:31.225088    6422 command_runner.go:130] > ExecStart=
	I1212 12:16:31.225103    6422 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I1212 12:16:31.225109    6422 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 12:16:31.225115    6422 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 12:16:31.225121    6422 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 12:16:31.225125    6422 command_runner.go:130] > LimitNOFILE=infinity
	I1212 12:16:31.225129    6422 command_runner.go:130] > LimitNPROC=infinity
	I1212 12:16:31.225133    6422 command_runner.go:130] > LimitCORE=infinity
	I1212 12:16:31.225137    6422 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 12:16:31.225153    6422 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 12:16:31.225157    6422 command_runner.go:130] > TasksMax=infinity
	I1212 12:16:31.225161    6422 command_runner.go:130] > TimeoutStartSec=0
	I1212 12:16:31.225168    6422 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 12:16:31.225171    6422 command_runner.go:130] > Delegate=yes
	I1212 12:16:31.225176    6422 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 12:16:31.225182    6422 command_runner.go:130] > KillMode=process
	I1212 12:16:31.225185    6422 command_runner.go:130] > [Install]
	I1212 12:16:31.225191    6422 command_runner.go:130] > WantedBy=multi-user.target
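	Cgroup-driver detection here works by reading the docker unit back with systemctl cat. Once dockerd is up again, the effective driver can also be read straight from the daemon; a quick check inside the guest, assuming the docker CLI is available there (it is used later in this log):
	    $ docker info --format '{{.CgroupDriver}}'   # prints the driver dockerd is actually using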
	I1212 12:16:31.225298    6422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 12:16:31.237810    6422 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 12:16:31.251350    6422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 12:16:31.260525    6422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 12:16:31.269280    6422 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 12:16:31.289664    6422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 12:16:31.298962    6422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 12:16:31.311370    6422 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 12:16:31.311431    6422 ssh_runner.go:195] Run: which cri-dockerd
	I1212 12:16:31.313647    6422 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 12:16:31.313711    6422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 12:16:31.319239    6422 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 12:16:31.330188    6422 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 12:16:31.417062    6422 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 12:16:31.526600    6422 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 12:16:31.526679    6422 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 12:16:31.538460    6422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:16:31.626998    6422 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 12:16:32.863112    6422 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.236112242s)
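	The /etc/docker/daemon.json written at 12:16:31.526679 is described only by its size (130 bytes); its contents are not echoed in this log. To see what the guest actually ended up with after the restart:
	    $ sudo wc -c /etc/docker/daemon.json
	    $ sudo cat /etc/docker/daemon.json
	Per docker.go:560 above it should be selecting cgroupfs as the cgroup driver.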
	I1212 12:16:32.863171    6422 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 12:16:32.946040    6422 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 12:16:33.032807    6422 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 12:16:33.132038    6422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:16:33.229468    6422 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 12:16:33.245281    6422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:16:33.350417    6422 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 12:16:33.402733    6422 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 12:16:33.402810    6422 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 12:16:33.406503    6422 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 12:16:33.406523    6422 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 12:16:33.406532    6422 command_runner.go:130] > Device: 16h/22d	Inode: 862         Links: 1
	I1212 12:16:33.406541    6422 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1212 12:16:33.406547    6422 command_runner.go:130] > Access: 2023-12-12 20:16:33.311864608 +0000
	I1212 12:16:33.406553    6422 command_runner.go:130] > Modify: 2023-12-12 20:16:33.311864608 +0000
	I1212 12:16:33.406561    6422 command_runner.go:130] > Change: 2023-12-12 20:16:33.313894062 +0000
	I1212 12:16:33.406565    6422 command_runner.go:130] >  Birth: -
	I1212 12:16:33.406750    6422 start.go:543] Will wait 60s for crictl version
	I1212 12:16:33.406798    6422 ssh_runner.go:195] Run: which crictl
	I1212 12:16:33.409265    6422 command_runner.go:130] > /usr/bin/crictl
	I1212 12:16:33.409310    6422 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 12:16:33.448288    6422 command_runner.go:130] > Version:  0.1.0
	I1212 12:16:33.448301    6422 command_runner.go:130] > RuntimeName:  docker
	I1212 12:16:33.448305    6422 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1212 12:16:33.448309    6422 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 12:16:33.448382    6422 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1212 12:16:33.448454    6422 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 12:16:33.465833    6422 command_runner.go:130] > 24.0.7
	I1212 12:16:33.466683    6422 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 12:16:33.483274    6422 command_runner.go:130] > 24.0.7
	I1212 12:16:33.525896    6422 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 12:16:33.525968    6422 main.go:141] libmachine: (multinode-675000) Calling .GetIP
	I1212 12:16:33.526304    6422 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1212 12:16:33.529568    6422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 12:16:33.538593    6422 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 12:16:33.538672    6422 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 12:16:33.551401    6422 docker.go:671] Got preloaded images: 
	I1212 12:16:33.551415    6422 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1212 12:16:33.551459    6422 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 12:16:33.557250    6422 command_runner.go:139] > {"Repositories":{}}
	I1212 12:16:33.557482    6422 ssh_runner.go:195] Run: which lz4
	I1212 12:16:33.559601    6422 command_runner.go:130] > /usr/bin/lz4
	I1212 12:16:33.559720    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1212 12:16:33.559827    6422 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 12:16:33.562128    6422 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 12:16:33.562264    6422 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 12:16:33.562281    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1212 12:16:35.077071    6422 docker.go:635] Took 1.517305 seconds to copy over tarball
	I1212 12:16:35.077132    6422 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 12:16:38.499447    6422 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.42234987s)
	I1212 12:16:38.499461    6422 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 12:16:38.525932    6422 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 12:16:38.531794    6422 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1212 12:16:38.531883    6422 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1212 12:16:38.544809    6422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:16:38.628242    6422 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 12:16:39.973616    6422 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.345374061s)
	I1212 12:16:39.973695    6422 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 12:16:39.987353    6422 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 12:16:39.987366    6422 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 12:16:39.987370    6422 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 12:16:39.987375    6422 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 12:16:39.987378    6422 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 12:16:39.987383    6422 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 12:16:39.987389    6422 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 12:16:39.987397    6422 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 12:16:39.987979    6422 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 12:16:39.987995    6422 cache_images.go:84] Images are preloaded, skipping loading
	I1212 12:16:39.988076    6422 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 12:16:40.006821    6422 command_runner.go:130] > cgroupfs
	I1212 12:16:40.007456    6422 cni.go:84] Creating CNI manager for ""
	I1212 12:16:40.007467    6422 cni.go:136] 1 nodes found, recommending kindnet
	I1212 12:16:40.007477    6422 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 12:16:40.007495    6422 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-675000 NodeName:multinode-675000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 12:16:40.007578    6422 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-675000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 12:16:40.007631    6422 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-675000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-675000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 12:16:40.007677    6422 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 12:16:40.015958    6422 command_runner.go:130] > kubeadm
	I1212 12:16:40.015968    6422 command_runner.go:130] > kubectl
	I1212 12:16:40.015971    6422 command_runner.go:130] > kubelet
	I1212 12:16:40.015990    6422 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 12:16:40.016037    6422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 12:16:40.022211    6422 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1212 12:16:40.033284    6422 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 12:16:40.045171    6422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1212 12:16:40.056695    6422 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I1212 12:16:40.058928    6422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 12:16:40.066622    6422 certs.go:56] Setting up /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000 for IP: 192.169.0.13
	I1212 12:16:40.066640    6422 certs.go:190] acquiring lock for shared ca certs: {Name:mk3a28fc3e7d169ec96b49a3f31bfa6edcaf7ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:16:40.066810    6422 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.key
	I1212 12:16:40.066881    6422 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17734-1975/.minikube/proxy-client-ca.key
	I1212 12:16:40.066939    6422 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.key
	I1212 12:16:40.066953    6422 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.crt with IP's: []
	I1212 12:16:40.219695    6422 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.crt ...
	I1212 12:16:40.219710    6422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.crt: {Name:mk9d4a09c5343657b45a4d82f7cd1b970d37a0e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:16:40.220005    6422 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.key ...
	I1212 12:16:40.220014    6422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.key: {Name:mk5c453e27c1b6769b6ac164334fb3d1c0c33715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:16:40.220227    6422 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.key.ff8d457b
	I1212 12:16:40.220242    6422 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.crt.ff8d457b with IP's: [192.169.0.13 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 12:16:40.526420    6422 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.crt.ff8d457b ...
	I1212 12:16:40.526435    6422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.crt.ff8d457b: {Name:mk19ef428dc37c16fa12952902fe02b789c3f964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:16:40.526722    6422 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.key.ff8d457b ...
	I1212 12:16:40.526731    6422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.key.ff8d457b: {Name:mkdabf8646659cd79be11279336b2e48bc96bd7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:16:40.526947    6422 certs.go:337] copying /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.crt.ff8d457b -> /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.crt
	I1212 12:16:40.527127    6422 certs.go:341] copying /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.key.ff8d457b -> /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.key
	I1212 12:16:40.527303    6422 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/proxy-client.key
	I1212 12:16:40.527337    6422 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/proxy-client.crt with IP's: []
	I1212 12:16:40.572830    6422 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/proxy-client.crt ...
	I1212 12:16:40.572843    6422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/proxy-client.crt: {Name:mk66945d2b1f0c010a4bf76f11b690686c1edcfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:16:40.573090    6422 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/proxy-client.key ...
	I1212 12:16:40.573099    6422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/proxy-client.key: {Name:mk0ac0b918cae8c5f6a7cbe4da1304d61f0931cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:16:40.573299    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 12:16:40.573328    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 12:16:40.573352    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 12:16:40.573371    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 12:16:40.573388    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 12:16:40.573404    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 12:16:40.573420    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 12:16:40.573437    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 12:16:40.573531    6422 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/3198.pem (1338 bytes)
	W1212 12:16:40.573581    6422 certs.go:433] ignoring /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/3198_empty.pem, impossibly tiny 0 bytes
	I1212 12:16:40.573592    6422 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 12:16:40.573625    6422 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem (1078 bytes)
	I1212 12:16:40.573654    6422 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem (1123 bytes)
	I1212 12:16:40.573684    6422 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem (1675 bytes)
	I1212 12:16:40.573749    6422 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem (1708 bytes)
	I1212 12:16:40.573779    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 12:16:40.573796    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/3198.pem -> /usr/share/ca-certificates/3198.pem
	I1212 12:16:40.573811    6422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem -> /usr/share/ca-certificates/31982.pem
	I1212 12:16:40.574255    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 12:16:40.591113    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 12:16:40.606805    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 12:16:40.622326    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 12:16:40.637937    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 12:16:40.654313    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 12:16:40.670153    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 12:16:40.686334    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 12:16:40.702518    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 12:16:40.718397    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/3198.pem --> /usr/share/ca-certificates/3198.pem (1338 bytes)
	I1212 12:16:40.734454    6422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem --> /usr/share/ca-certificates/31982.pem (1708 bytes)
	I1212 12:16:40.750857    6422 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 12:16:40.761969    6422 ssh_runner.go:195] Run: openssl version
	I1212 12:16:40.765214    6422 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 12:16:40.765405    6422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3198.pem && ln -fs /usr/share/ca-certificates/3198.pem /etc/ssl/certs/3198.pem"
	I1212 12:16:40.772242    6422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3198.pem
	I1212 12:16:40.775003    6422 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 20:03 /usr/share/ca-certificates/3198.pem
	I1212 12:16:40.775165    6422 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:03 /usr/share/ca-certificates/3198.pem
	I1212 12:16:40.775203    6422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3198.pem
	I1212 12:16:40.778558    6422 command_runner.go:130] > 51391683
	I1212 12:16:40.778718    6422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3198.pem /etc/ssl/certs/51391683.0"
	I1212 12:16:40.785935    6422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31982.pem && ln -fs /usr/share/ca-certificates/31982.pem /etc/ssl/certs/31982.pem"
	I1212 12:16:40.793541    6422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31982.pem
	I1212 12:16:40.796492    6422 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 20:03 /usr/share/ca-certificates/31982.pem
	I1212 12:16:40.796687    6422 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:03 /usr/share/ca-certificates/31982.pem
	I1212 12:16:40.796740    6422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31982.pem
	I1212 12:16:40.800075    6422 command_runner.go:130] > 3ec20f2e
	I1212 12:16:40.800311    6422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31982.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 12:16:40.807264    6422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 12:16:40.814220    6422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 12:16:40.816857    6422 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:58 /usr/share/ca-certificates/minikubeCA.pem
	I1212 12:16:40.817010    6422 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:58 /usr/share/ca-certificates/minikubeCA.pem
	I1212 12:16:40.817048    6422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 12:16:40.820335    6422 command_runner.go:130] > b5213941
	I1212 12:16:40.820563    6422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 12:16:40.827423    6422 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 12:16:40.829779    6422 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 12:16:40.829905    6422 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 12:16:40.829953    6422 kubeadm.go:404] StartCluster: {Name:multinode-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-675000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 12:16:40.830036    6422 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 12:16:40.846640    6422 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 12:16:40.853371    6422 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1212 12:16:40.853383    6422 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1212 12:16:40.853389    6422 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1212 12:16:40.853510    6422 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 12:16:40.860117    6422 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 12:16:40.866430    6422 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 12:16:40.866443    6422 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 12:16:40.866451    6422 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 12:16:40.866471    6422 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 12:16:40.866581    6422 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 12:16:40.866604    6422 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 12:16:40.934190    6422 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 12:16:40.934196    6422 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1212 12:16:40.934239    6422 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 12:16:40.934245    6422 command_runner.go:130] > [preflight] Running pre-flight checks
	I1212 12:16:41.099540    6422 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 12:16:41.099544    6422 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 12:16:41.099641    6422 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 12:16:41.099651    6422 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 12:16:41.099741    6422 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 12:16:41.099747    6422 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 12:16:41.327618    6422 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 12:16:41.383656    6422 out.go:204]   - Generating certificates and keys ...
	I1212 12:16:41.327646    6422 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 12:16:41.383821    6422 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 12:16:41.383844    6422 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 12:16:41.383901    6422 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 12:16:41.383908    6422 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 12:16:41.427204    6422 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 12:16:41.427215    6422 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 12:16:41.704154    6422 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 12:16:41.704168    6422 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1212 12:16:41.837927    6422 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 12:16:41.837946    6422 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1212 12:16:41.928747    6422 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 12:16:41.928754    6422 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1212 12:16:42.361323    6422 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 12:16:42.361345    6422 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1212 12:16:42.361491    6422 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-675000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I1212 12:16:42.361500    6422 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-675000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I1212 12:16:42.570936    6422 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 12:16:42.570952    6422 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1212 12:16:42.571067    6422 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-675000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I1212 12:16:42.571075    6422 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-675000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I1212 12:16:42.733889    6422 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 12:16:42.733896    6422 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 12:16:42.870087    6422 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 12:16:42.870138    6422 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 12:16:43.107934    6422 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 12:16:43.107942    6422 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1212 12:16:43.108166    6422 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 12:16:43.108176    6422 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 12:16:43.328324    6422 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 12:16:43.328332    6422 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 12:16:43.450252    6422 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 12:16:43.450260    6422 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 12:16:43.645275    6422 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 12:16:43.645276    6422 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 12:16:43.793273    6422 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 12:16:43.793278    6422 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 12:16:43.793884    6422 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 12:16:43.793895    6422 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 12:16:43.796597    6422 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 12:16:43.796611    6422 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 12:16:43.818143    6422 out.go:204]   - Booting up control plane ...
	I1212 12:16:43.818213    6422 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 12:16:43.818242    6422 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 12:16:43.818337    6422 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 12:16:43.818345    6422 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 12:16:43.818406    6422 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 12:16:43.818413    6422 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 12:16:43.818498    6422 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 12:16:43.818504    6422 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 12:16:43.818582    6422 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 12:16:43.818596    6422 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 12:16:43.818626    6422 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 12:16:43.818630    6422 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 12:16:43.900708    6422 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 12:16:43.900722    6422 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 12:16:49.883247    6422 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.002237 seconds
	I1212 12:16:49.883281    6422 command_runner.go:130] > [apiclient] All control plane components are healthy after 6.002237 seconds
	I1212 12:16:49.883440    6422 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 12:16:49.883446    6422 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 12:16:49.895248    6422 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 12:16:49.895272    6422 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 12:16:50.409686    6422 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 12:16:50.409708    6422 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1212 12:16:50.409867    6422 kubeadm.go:322] [mark-control-plane] Marking the node multinode-675000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 12:16:50.409876    6422 command_runner.go:130] > [mark-control-plane] Marking the node multinode-675000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 12:16:50.919811    6422 kubeadm.go:322] [bootstrap-token] Using token: 1g2iv6.43tegdxwoknkhroa
	I1212 12:16:50.919823    6422 command_runner.go:130] > [bootstrap-token] Using token: 1g2iv6.43tegdxwoknkhroa
	I1212 12:16:50.959481    6422 out.go:204]   - Configuring RBAC rules ...
	I1212 12:16:50.959577    6422 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 12:16:50.959586    6422 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 12:16:50.962282    6422 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 12:16:50.962290    6422 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 12:16:50.968399    6422 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 12:16:50.968401    6422 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 12:16:50.971461    6422 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 12:16:50.971481    6422 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 12:16:50.974269    6422 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 12:16:50.974284    6422 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 12:16:50.976849    6422 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 12:16:50.976864    6422 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 12:16:50.986067    6422 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 12:16:50.986076    6422 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 12:16:51.165049    6422 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 12:16:51.165073    6422 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 12:16:51.368652    6422 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 12:16:51.368671    6422 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 12:16:51.369447    6422 kubeadm.go:322] 
	I1212 12:16:51.369488    6422 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 12:16:51.369498    6422 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1212 12:16:51.369507    6422 kubeadm.go:322] 
	I1212 12:16:51.369586    6422 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 12:16:51.369593    6422 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1212 12:16:51.369600    6422 kubeadm.go:322] 
	I1212 12:16:51.369637    6422 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 12:16:51.369648    6422 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1212 12:16:51.369733    6422 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 12:16:51.369741    6422 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 12:16:51.369792    6422 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 12:16:51.369798    6422 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 12:16:51.369803    6422 kubeadm.go:322] 
	I1212 12:16:51.369842    6422 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 12:16:51.369851    6422 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1212 12:16:51.369860    6422 kubeadm.go:322] 
	I1212 12:16:51.369918    6422 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 12:16:51.369925    6422 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 12:16:51.369928    6422 kubeadm.go:322] 
	I1212 12:16:51.369967    6422 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 12:16:51.369975    6422 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1212 12:16:51.370032    6422 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 12:16:51.370038    6422 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 12:16:51.370094    6422 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 12:16:51.370101    6422 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 12:16:51.370109    6422 kubeadm.go:322] 
	I1212 12:16:51.370196    6422 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 12:16:51.370206    6422 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1212 12:16:51.370262    6422 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 12:16:51.370274    6422 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1212 12:16:51.370285    6422 kubeadm.go:322] 
	I1212 12:16:51.370359    6422 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1g2iv6.43tegdxwoknkhroa \
	I1212 12:16:51.370367    6422 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 1g2iv6.43tegdxwoknkhroa \
	I1212 12:16:51.370448    6422 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f99f9657aff247a8042444d6497aa99debec968500b23dc54ae1da873e195109 \
	I1212 12:16:51.370454    6422 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f99f9657aff247a8042444d6497aa99debec968500b23dc54ae1da873e195109 \
	I1212 12:16:51.370473    6422 kubeadm.go:322] 	--control-plane 
	I1212 12:16:51.370478    6422 command_runner.go:130] > 	--control-plane 
	I1212 12:16:51.370490    6422 kubeadm.go:322] 
	I1212 12:16:51.370567    6422 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 12:16:51.370574    6422 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1212 12:16:51.370577    6422 kubeadm.go:322] 
	I1212 12:16:51.370656    6422 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 1g2iv6.43tegdxwoknkhroa \
	I1212 12:16:51.370662    6422 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1g2iv6.43tegdxwoknkhroa \
	I1212 12:16:51.370775    6422 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f99f9657aff247a8042444d6497aa99debec968500b23dc54ae1da873e195109 
	I1212 12:16:51.370782    6422 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f99f9657aff247a8042444d6497aa99debec968500b23dc54ae1da873e195109 
	I1212 12:16:51.371691    6422 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 12:16:51.371700    6422 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 12:16:51.371718    6422 cni.go:84] Creating CNI manager for ""
	I1212 12:16:51.371725    6422 cni.go:136] 1 nodes found, recommending kindnet
	I1212 12:16:51.409058    6422 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 12:16:51.445003    6422 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 12:16:51.450735    6422 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 12:16:51.450757    6422 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 12:16:51.450762    6422 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 12:16:51.450768    6422 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 12:16:51.450775    6422 command_runner.go:130] > Access: 2023-12-12 20:16:27.330111218 +0000
	I1212 12:16:51.450780    6422 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 12:16:51.450784    6422 command_runner.go:130] > Change: 2023-12-12 20:16:26.047111307 +0000
	I1212 12:16:51.450789    6422 command_runner.go:130] >  Birth: -
	I1212 12:16:51.450995    6422 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 12:16:51.451008    6422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 12:16:51.467388    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 12:16:52.025242    6422 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1212 12:16:52.029609    6422 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1212 12:16:52.035748    6422 command_runner.go:130] > serviceaccount/kindnet created
	I1212 12:16:52.044662    6422 command_runner.go:130] > daemonset.apps/kindnet created
	I1212 12:16:52.049572    6422 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 12:16:52.049625    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=multinode-675000 minikube.k8s.io/updated_at=2023_12_12T12_16_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:52.049625    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:52.139170    6422 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1212 12:16:52.140893    6422 command_runner.go:130] > -16
	I1212 12:16:52.140909    6422 ops.go:34] apiserver oom_adj: -16
	I1212 12:16:52.140936    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:52.157264    6422 command_runner.go:130] > node/multinode-675000 labeled
	I1212 12:16:52.225230    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:52.225393    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:52.298946    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:52.800071    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:52.863295    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:53.301242    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:53.373987    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:53.800939    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:53.871168    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:54.301218    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:54.368599    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:54.799179    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:54.864423    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:55.300730    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:55.360635    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:55.799909    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:55.867437    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:56.299632    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:56.372442    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:56.800510    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:56.864795    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:57.299706    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:57.371983    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:57.800381    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:57.862656    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:58.301145    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:58.368418    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:58.799267    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:58.859179    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:59.300129    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:59.371799    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:16:59.800472    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:16:59.865105    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:17:00.300193    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:17:00.364175    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:17:00.799344    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:17:00.865751    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:17:01.299071    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:17:01.375477    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:17:01.799599    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:17:01.868634    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:17:02.299758    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:17:02.369133    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:17:02.799311    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:17:02.875936    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:17:03.299634    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:17:03.369937    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:17:03.799996    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:17:03.861379    6422 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1212 12:17:04.300053    6422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:17:04.400965    6422 command_runner.go:130] > NAME      SECRETS   AGE
	I1212 12:17:04.400976    6422 command_runner.go:130] > default   0         0s
	I1212 12:17:04.401861    6422 kubeadm.go:1088] duration metric: took 12.352463026s to wait for elevateKubeSystemPrivileges.
	I1212 12:17:04.401875    6422 kubeadm.go:406] StartCluster complete in 23.572272751s
	I1212 12:17:04.401889    6422 settings.go:142] acquiring lock: {Name:mk437dff6ee4f62ea2311e5ad7dccf890596936f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:17:04.401963    6422 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:17:04.402477    6422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/kubeconfig: {Name:mk6d5ef4e0f8c6a055bbd7ff4a33097a831e2d15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:17:04.402738    6422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 12:17:04.402762    6422 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 12:17:04.402803    6422 addons.go:69] Setting storage-provisioner=true in profile "multinode-675000"
	I1212 12:17:04.402813    6422 addons.go:69] Setting default-storageclass=true in profile "multinode-675000"
	I1212 12:17:04.402842    6422 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-675000"
	I1212 12:17:04.402819    6422 addons.go:231] Setting addon storage-provisioner=true in "multinode-675000"
	I1212 12:17:04.402902    6422 config.go:182] Loaded profile config "multinode-675000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:17:04.402940    6422 host.go:66] Checking if "multinode-675000" exists ...
	I1212 12:17:04.402988    6422 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:17:04.403106    6422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:17:04.403133    6422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:17:04.403224    6422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:17:04.403242    6422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:17:04.403266    6422 kapi.go:59] client config for multinode-675000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.key", CAFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 12:17:04.405631    6422 cert_rotation.go:137] Starting client certificate rotation controller
	I1212 12:17:04.406860    6422 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 12:17:04.406870    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:04.406878    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:04.406883    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:04.412083    6422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51347
	I1212 12:17:04.412416    6422 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:17:04.412504    6422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51349
	I1212 12:17:04.412811    6422 main.go:141] libmachine: Using API Version  1
	I1212 12:17:04.412827    6422 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:17:04.412889    6422 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:17:04.413056    6422 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:17:04.413180    6422 main.go:141] libmachine: (multinode-675000) Calling .GetState
	I1212 12:17:04.413236    6422 main.go:141] libmachine: Using API Version  1
	I1212 12:17:04.413249    6422 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:17:04.413283    6422 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:17:04.413366    6422 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 12:17:04.413382    6422 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6434
	I1212 12:17:04.413385    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:04.413391    6422 round_trippers.go:580]     Audit-Id: ec73ddc8-9bc8-4def-ad5c-efa716f53914
	I1212 12:17:04.413396    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:04.413401    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:04.413406    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:04.413411    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:04.413419    6422 round_trippers.go:580]     Content-Length: 291
	I1212 12:17:04.413424    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:04 GMT
	I1212 12:17:04.413466    6422 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:17:04.413469    6422 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1a3fb229-343b-479d-911a-188712e3cca3","resourceVersion":"372","creationTimestamp":"2023-12-12T20:16:51Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 12:17:04.413803    6422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:17:04.413816    6422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:17:04.413814    6422 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1a3fb229-343b-479d-911a-188712e3cca3","resourceVersion":"372","creationTimestamp":"2023-12-12T20:16:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 12:17:04.413854    6422 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 12:17:04.413864    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:04.413870    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:04.413876    6422 round_trippers.go:473]     Content-Type: application/json
	I1212 12:17:04.413881    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:04.415006    6422 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:17:04.415301    6422 kapi.go:59] client config for multinode-675000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.key", CAFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 12:17:04.416180    6422 addons.go:231] Setting addon default-storageclass=true in "multinode-675000"
	I1212 12:17:04.416201    6422 host.go:66] Checking if "multinode-675000" exists ...
	I1212 12:17:04.416457    6422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:17:04.416557    6422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:17:04.418498    6422 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 12:17:04.418522    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:04.418528    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:04 GMT
	I1212 12:17:04.418532    6422 round_trippers.go:580]     Audit-Id: 9d37d47d-de5f-4a83-a88b-73201cf6a4cf
	I1212 12:17:04.418536    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:04.418541    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:04.418545    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:04.418549    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:04.418573    6422 round_trippers.go:580]     Content-Length: 291
	I1212 12:17:04.418597    6422 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1a3fb229-343b-479d-911a-188712e3cca3","resourceVersion":"375","creationTimestamp":"2023-12-12T20:16:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 12:17:04.418727    6422 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 12:17:04.418737    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:04.418742    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:04.418747    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:04.420230    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:04.420243    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:04.420249    6422 round_trippers.go:580]     Audit-Id: 083ce8f8-1051-4b0f-b76b-71e2f3fc03fa
	I1212 12:17:04.420255    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:04.420262    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:04.420270    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:04.420275    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:04.420279    6422 round_trippers.go:580]     Content-Length: 291
	I1212 12:17:04.420284    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:04 GMT
	I1212 12:17:04.420309    6422 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1a3fb229-343b-479d-911a-188712e3cca3","resourceVersion":"375","creationTimestamp":"2023-12-12T20:16:51Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1212 12:17:04.420363    6422 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-675000" context rescaled to 1 replicas
	I1212 12:17:04.420383    6422 start.go:223] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 12:17:04.444606    6422 out.go:177] * Verifying Kubernetes components...
	I1212 12:17:04.422488    6422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51351
	I1212 12:17:04.425378    6422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51352
	I1212 12:17:04.485819    6422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 12:17:04.486270    6422 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:17:04.486377    6422 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:17:04.486645    6422 main.go:141] libmachine: Using API Version  1
	I1212 12:17:04.486656    6422 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:17:04.486736    6422 main.go:141] libmachine: Using API Version  1
	I1212 12:17:04.486750    6422 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:17:04.486935    6422 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:17:04.486986    6422 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:17:04.487156    6422 main.go:141] libmachine: (multinode-675000) Calling .GetState
	I1212 12:17:04.487264    6422 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:17:04.487357    6422 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6434
	I1212 12:17:04.487489    6422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:17:04.487539    6422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:17:04.489478    6422 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:17:04.510766    6422 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 12:17:04.496328    6422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51355
	I1212 12:17:04.500458    6422 command_runner.go:130] > apiVersion: v1
	I1212 12:17:04.510791    6422 command_runner.go:130] > data:
	I1212 12:17:04.531600    6422 command_runner.go:130] >   Corefile: |
	I1212 12:17:04.511217    6422 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:17:04.531622    6422 command_runner.go:130] >     .:53 {
	I1212 12:17:04.531651    6422 command_runner.go:130] >         errors
	I1212 12:17:04.531693    6422 command_runner.go:130] >         health {
	I1212 12:17:04.531713    6422 command_runner.go:130] >            lameduck 5s
	I1212 12:17:04.531766    6422 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 12:17:04.531778    6422 command_runner.go:130] >         }
	I1212 12:17:04.531795    6422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 12:17:04.531801    6422 command_runner.go:130] >         ready
	I1212 12:17:04.531832    6422 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 12:17:04.531840    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:17:04.531875    6422 command_runner.go:130] >            pods insecure
	I1212 12:17:04.531891    6422 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 12:17:04.531907    6422 command_runner.go:130] >            ttl 30
	I1212 12:17:04.531919    6422 command_runner.go:130] >         }
	I1212 12:17:04.531929    6422 command_runner.go:130] >         prometheus :9153
	I1212 12:17:04.531938    6422 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 12:17:04.531948    6422 command_runner.go:130] >            max_concurrent 1000
	I1212 12:17:04.531956    6422 command_runner.go:130] >         }
	I1212 12:17:04.531969    6422 command_runner.go:130] >         cache 30
	I1212 12:17:04.531982    6422 command_runner.go:130] >         loop
	I1212 12:17:04.531996    6422 command_runner.go:130] >         reload
	I1212 12:17:04.532012    6422 command_runner.go:130] >         loadbalance
	I1212 12:17:04.532026    6422 command_runner.go:130] >     }
	I1212 12:17:04.532046    6422 command_runner.go:130] > kind: ConfigMap
	I1212 12:17:04.532059    6422 command_runner.go:130] > metadata:
	I1212 12:17:04.532085    6422 command_runner.go:130] >   creationTimestamp: "2023-12-12T20:16:51Z"
	I1212 12:17:04.532101    6422 command_runner.go:130] >   name: coredns
	I1212 12:17:04.532115    6422 command_runner.go:130] >   namespace: kube-system
	I1212 12:17:04.532129    6422 command_runner.go:130] >   resourceVersion: "263"
	I1212 12:17:04.532142    6422 command_runner.go:130] >   uid: 0a612017-7a35-4efe-a969-615a6e8509a6
	I1212 12:17:04.532296    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:17:04.532534    6422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 12:17:04.532679    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:04.532768    6422 main.go:141] libmachine: Using API Version  1
	I1212 12:17:04.532790    6422 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:17:04.532919    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:17:04.532922    6422 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:17:04.533127    6422 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:17:04.533259    6422 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:17:04.533277    6422 kapi.go:59] client config for multinode-675000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.key", CAFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 12:17:04.533448    6422 main.go:141] libmachine: (multinode-675000) Calling .GetState
	I1212 12:17:04.533583    6422 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:17:04.533666    6422 node_ready.go:35] waiting up to 6m0s for node "multinode-675000" to be "Ready" ...
	I1212 12:17:04.533678    6422 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6434
	I1212 12:17:04.533740    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:04.533748    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:04.533756    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:04.533763    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:04.535232    6422 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:17:04.535427    6422 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 12:17:04.535436    6422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 12:17:04.535446    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:17:04.535546    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:17:04.535636    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:04.535744    6422 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:17:04.535833    6422 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:17:04.539474    6422 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 12:17:04.539487    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:04.539493    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:04.539497    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:04.539502    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:04.539506    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:04.539511    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:04 GMT
	I1212 12:17:04.539516    6422 round_trippers.go:580]     Audit-Id: 5be7c4fb-c203-49a5-b923-73e8df2f9526
	I1212 12:17:04.541977    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:04.542504    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:04.542512    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:04.542519    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:04.542525    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:04.545171    6422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:17:04.545184    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:04.545199    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:04.545204    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:04.545210    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:04 GMT
	I1212 12:17:04.545215    6422 round_trippers.go:580]     Audit-Id: 0b32046e-1364-4efa-8c64-1910dcd00bf7
	I1212 12:17:04.545220    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:04.545226    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:04.546626    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:04.766755    6422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 12:17:04.767216    6422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 12:17:05.047762    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:05.047777    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:05.047784    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:05.047789    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:05.049413    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:05.049424    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:05.049432    6422 round_trippers.go:580]     Audit-Id: 5c3abb43-4c99-4524-9756-79bf33e36a81
	I1212 12:17:05.049443    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:05.049452    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:05.049457    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:05.049462    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:05.049466    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:05 GMT
	I1212 12:17:05.049552    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:05.207713    6422 command_runner.go:130] > configmap/coredns replaced
	I1212 12:17:05.207736    6422 start.go:929] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I1212 12:17:05.218455    6422 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1212 12:17:05.220072    6422 main.go:141] libmachine: Making call to close driver server
	I1212 12:17:05.220083    6422 main.go:141] libmachine: (multinode-675000) Calling .Close
	I1212 12:17:05.220268    6422 main.go:141] libmachine: (multinode-675000) DBG | Closing plugin on server side
	I1212 12:17:05.220272    6422 main.go:141] libmachine: Successfully made call to close driver server
	I1212 12:17:05.220280    6422 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 12:17:05.220286    6422 main.go:141] libmachine: Making call to close driver server
	I1212 12:17:05.220291    6422 main.go:141] libmachine: (multinode-675000) Calling .Close
	I1212 12:17:05.220422    6422 main.go:141] libmachine: Successfully made call to close driver server
	I1212 12:17:05.220438    6422 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 12:17:05.220444    6422 main.go:141] libmachine: (multinode-675000) DBG | Closing plugin on server side
	I1212 12:17:05.220516    6422 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 12:17:05.220522    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:05.220528    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:05.220535    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:05.221989    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:05.222000    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:05.222005    6422 round_trippers.go:580]     Content-Length: 1273
	I1212 12:17:05.222010    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:05 GMT
	I1212 12:17:05.222015    6422 round_trippers.go:580]     Audit-Id: b32405e4-77f5-45ce-a565-6a5c246e1485
	I1212 12:17:05.222020    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:05.222025    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:05.222030    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:05.222037    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:05.222078    6422 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"402"},"items":[{"metadata":{"name":"standard","uid":"8c461a28-e249-492c-9549-9d63fb276924","resourceVersion":"402","creationTimestamp":"2023-12-12T20:17:05Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T20:17:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 12:17:05.222329    6422 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"8c461a28-e249-492c-9549-9d63fb276924","resourceVersion":"402","creationTimestamp":"2023-12-12T20:17:05Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T20:17:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 12:17:05.222359    6422 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 12:17:05.222364    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:05.222371    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:05.222377    6422 round_trippers.go:473]     Content-Type: application/json
	I1212 12:17:05.222381    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:05.224038    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:05.224046    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:05.224051    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:05.224075    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:05.224083    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:05.224088    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:05.224092    6422 round_trippers.go:580]     Content-Length: 1220
	I1212 12:17:05.224097    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:05 GMT
	I1212 12:17:05.224102    6422 round_trippers.go:580]     Audit-Id: 0a3aad08-721a-4066-9004-856be75b22f2
	I1212 12:17:05.224117    6422 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"8c461a28-e249-492c-9549-9d63fb276924","resourceVersion":"402","creationTimestamp":"2023-12-12T20:17:05Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T20:17:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 12:17:05.224184    6422 main.go:141] libmachine: Making call to close driver server
	I1212 12:17:05.224193    6422 main.go:141] libmachine: (multinode-675000) Calling .Close
	I1212 12:17:05.224339    6422 main.go:141] libmachine: (multinode-675000) DBG | Closing plugin on server side
	I1212 12:17:05.224349    6422 main.go:141] libmachine: Successfully made call to close driver server
	I1212 12:17:05.224356    6422 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 12:17:05.382686    6422 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1212 12:17:05.387096    6422 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1212 12:17:05.392000    6422 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 12:17:05.397436    6422 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1212 12:17:05.404599    6422 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1212 12:17:05.411159    6422 command_runner.go:130] > pod/storage-provisioner created
	I1212 12:17:05.414631    6422 main.go:141] libmachine: Making call to close driver server
	I1212 12:17:05.414644    6422 main.go:141] libmachine: (multinode-675000) Calling .Close
	I1212 12:17:05.414793    6422 main.go:141] libmachine: Successfully made call to close driver server
	I1212 12:17:05.414798    6422 main.go:141] libmachine: (multinode-675000) DBG | Closing plugin on server side
	I1212 12:17:05.414801    6422 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 12:17:05.414813    6422 main.go:141] libmachine: Making call to close driver server
	I1212 12:17:05.414830    6422 main.go:141] libmachine: (multinode-675000) Calling .Close
	I1212 12:17:05.414950    6422 main.go:141] libmachine: Successfully made call to close driver server
	I1212 12:17:05.414960    6422 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 12:17:05.453176    6422 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1212 12:17:05.494570    6422 addons.go:502] enable addons completed in 1.09182301s: enabled=[default-storageclass storage-provisioner]
	I1212 12:17:05.547279    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:05.547292    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:05.547299    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:05.547303    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:05.549139    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:05.549151    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:05.549156    6422 round_trippers.go:580]     Audit-Id: 24685c98-59fa-4064-86f8-7e00bd6e9a0d
	I1212 12:17:05.549164    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:05.549169    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:05.549175    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:05.549183    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:05.549190    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:05 GMT
	I1212 12:17:05.549325    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:06.048643    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:06.048665    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:06.048677    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:06.048686    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:06.051435    6422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:17:06.051452    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:06.051462    6422 round_trippers.go:580]     Audit-Id: effedc75-cad6-4270-bcae-b65e1e392050
	I1212 12:17:06.051474    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:06.051484    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:06.051495    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:06.051505    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:06.051513    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:06 GMT
	I1212 12:17:06.051665    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:06.548657    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:06.548681    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:06.548693    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:06.548703    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:06.551418    6422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:17:06.551433    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:06.551441    6422 round_trippers.go:580]     Audit-Id: 2826df2b-0374-4ba8-8157-ae9c382f5646
	I1212 12:17:06.551447    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:06.551453    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:06.551461    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:06.551466    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:06.551472    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:06 GMT
	I1212 12:17:06.551567    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:06.551818    6422 node_ready.go:58] node "multinode-675000" has status "Ready":"False"
	I1212 12:17:07.046969    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:07.046982    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:07.046993    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:07.047011    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:07.050345    6422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 12:17:07.050356    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:07.050376    6422 round_trippers.go:580]     Audit-Id: 649ceabc-f5a0-46eb-b8d9-d466c93c651c
	I1212 12:17:07.050399    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:07.050408    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:07.050413    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:07.050418    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:07.050423    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:07 GMT
	I1212 12:17:07.050547    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:07.546991    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:07.547016    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:07.547029    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:07.547039    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:07.549787    6422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:17:07.549803    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:07.549811    6422 round_trippers.go:580]     Audit-Id: ed186526-9b73-4999-8cf7-78b01a295ad2
	I1212 12:17:07.549817    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:07.549823    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:07.549829    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:07.549836    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:07.549844    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:07 GMT
	I1212 12:17:07.550012    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:08.047365    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:08.047384    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:08.047397    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:08.047406    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:08.050323    6422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:17:08.050337    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:08.050344    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:08.050351    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:08 GMT
	I1212 12:17:08.050362    6422 round_trippers.go:580]     Audit-Id: a621cf18-a9a6-4586-a74c-0694f5b30a31
	I1212 12:17:08.050370    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:08.050376    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:08.050395    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:08.050471    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:08.547155    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:08.547170    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:08.547177    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:08.547182    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:08.548744    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:08.548756    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:08.548764    6422 round_trippers.go:580]     Audit-Id: b5156704-8581-48b8-ac6c-17af156ace16
	I1212 12:17:08.548769    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:08.548774    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:08.548779    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:08.548784    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:08.548800    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:08 GMT
	I1212 12:17:08.549040    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:09.048749    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:09.048763    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:09.048769    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:09.048774    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:09.050341    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:09.050352    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:09.050361    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:09.050368    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:09 GMT
	I1212 12:17:09.050373    6422 round_trippers.go:580]     Audit-Id: 6f775614-2d0e-449f-b1e7-90cb0545ed37
	I1212 12:17:09.050377    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:09.050382    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:09.050387    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:09.050498    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:09.050682    6422 node_ready.go:58] node "multinode-675000" has status "Ready":"False"
	I1212 12:17:09.547680    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:09.547692    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:09.547698    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:09.547704    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:09.551047    6422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 12:17:09.551079    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:09.551090    6422 round_trippers.go:580]     Audit-Id: 24daa2aa-858f-48a4-8384-167d65088165
	I1212 12:17:09.551106    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:09.551114    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:09.551123    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:09.551130    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:09.551138    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:09 GMT
	I1212 12:17:09.551209    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:10.047732    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:10.047749    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:10.047758    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:10.047763    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:10.049815    6422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:17:10.049826    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:10.049831    6422 round_trippers.go:580]     Audit-Id: e56c53a6-b23e-4872-a827-12520e484371
	I1212 12:17:10.049836    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:10.049841    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:10.049845    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:10.049851    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:10.049860    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:10 GMT
	I1212 12:17:10.050204    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:10.546854    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:10.546876    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:10.546883    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:10.546888    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:10.548540    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:10.548551    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:10.548560    6422 round_trippers.go:580]     Audit-Id: 52c45aba-2dab-47e1-b6f3-8a88ca6ae763
	I1212 12:17:10.548567    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:10.548573    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:10.548580    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:10.548586    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:10.548591    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:10 GMT
	I1212 12:17:10.548849    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:11.047193    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:11.047210    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:11.047235    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:11.047240    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:11.048726    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:11.048736    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:11.048742    6422 round_trippers.go:580]     Audit-Id: cb152ac9-6dab-47d9-bef6-a03f03c6f982
	I1212 12:17:11.048746    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:11.048751    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:11.048758    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:11.048767    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:11.048777    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:11 GMT
	I1212 12:17:11.048919    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:11.547654    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:11.547667    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:11.547674    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:11.547681    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:11.549461    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:11.549475    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:11.549483    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:11.549490    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:11.549499    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:11.549506    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:11.549520    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:11 GMT
	I1212 12:17:11.549525    6422 round_trippers.go:580]     Audit-Id: 8dcf9f12-21e9-4ba7-8d76-45d59447d3e8
	I1212 12:17:11.549659    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:11.549842    6422 node_ready.go:58] node "multinode-675000" has status "Ready":"False"
	I1212 12:17:12.047726    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:12.047749    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:12.047761    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:12.047771    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:12.051309    6422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 12:17:12.051336    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:12.051349    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:12.051359    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:12 GMT
	I1212 12:17:12.051367    6422 round_trippers.go:580]     Audit-Id: 66b5b941-6a88-47cb-83f8-369d80c553a3
	I1212 12:17:12.051376    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:12.051386    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:12.051395    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:12.051587    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:12.547650    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:12.547677    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:12.547690    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:12.547700    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:12.550700    6422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:17:12.550719    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:12.550728    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:12 GMT
	I1212 12:17:12.550757    6422 round_trippers.go:580]     Audit-Id: 8971ab43-6d8e-4758-ab54-bcfd2eeff346
	I1212 12:17:12.550768    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:12.550787    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:12.550794    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:12.550800    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:12.550899    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:13.047290    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:13.047305    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:13.047312    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:13.047317    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:13.049055    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:13.049067    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:13.049072    6422 round_trippers.go:580]     Audit-Id: bd90e017-2525-4f49-8915-332cf5d0f12e
	I1212 12:17:13.049077    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:13.049083    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:13.049088    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:13.049092    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:13.049097    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:13 GMT
	I1212 12:17:13.049292    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"351","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I1212 12:17:13.547021    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:13.547037    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:13.547082    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:13.547088    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:13.548645    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:13.548655    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:13.548663    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:13 GMT
	I1212 12:17:13.548670    6422 round_trippers.go:580]     Audit-Id: 31d274f3-2b57-421e-8ce1-eca43e9f0d1b
	I1212 12:17:13.548678    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:13.548685    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:13.548693    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:13.548701    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:13.548851    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"427","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 12:17:13.549039    6422 node_ready.go:49] node "multinode-675000" has status "Ready":"True"
	I1212 12:17:13.549051    6422 node_ready.go:38] duration metric: took 9.015493459s waiting for node "multinode-675000" to be "Ready" ...
	I1212 12:17:13.549058    6422 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 12:17:13.549100    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 12:17:13.549106    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:13.549112    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:13.549118    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:13.551383    6422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:17:13.551391    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:13.551396    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:13.551405    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:13 GMT
	I1212 12:17:13.551414    6422 round_trippers.go:580]     Audit-Id: 0b699371-b9ad-4894-902a-434c885cd143
	I1212 12:17:13.551428    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:13.551437    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:13.551445    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:13.552125    6422 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"433","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53932 chars]
	I1212 12:17:13.554407    6422 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2qgqq" in "kube-system" namespace to be "Ready" ...
	I1212 12:17:13.554445    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:17:13.554450    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:13.554456    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:13.554462    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:13.555784    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:13.555793    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:13.555798    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:13.555803    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:13.555808    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:13 GMT
	I1212 12:17:13.555813    6422 round_trippers.go:580]     Audit-Id: 977f5706-6fa6-4291-b418-9020af6ec517
	I1212 12:17:13.555820    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:13.555825    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:13.555900    6422 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"433","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 12:17:13.556144    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:13.556155    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:13.556161    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:13.556167    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:13.557323    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:13.557333    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:13.557338    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:13.557344    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:13.557349    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:13 GMT
	I1212 12:17:13.557353    6422 round_trippers.go:580]     Audit-Id: 35ef36b6-6035-4127-962a-98daa6e9519b
	I1212 12:17:13.557358    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:13.557374    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:13.557480    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"427","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 12:17:13.557679    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:17:13.557687    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:13.557692    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:13.557697    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:13.558986    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:13.558996    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:13.559006    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:13 GMT
	I1212 12:17:13.559015    6422 round_trippers.go:580]     Audit-Id: b82149dd-809d-4fbb-9ff4-c1215c15430b
	I1212 12:17:13.559022    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:13.559028    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:13.559032    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:13.559041    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:13.559175    6422 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"433","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 12:17:13.559481    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:13.559489    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:13.559495    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:13.559500    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:13.560792    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:13.560800    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:13.560806    6422 round_trippers.go:580]     Audit-Id: 8d0eacc9-e6a5-41ed-958f-f2597aa9ae89
	I1212 12:17:13.560810    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:13.560815    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:13.560820    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:13.560824    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:13.560829    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:13 GMT
	I1212 12:17:13.561394    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"427","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 12:17:14.062584    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:17:14.062647    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:14.062655    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:14.062662    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:14.064449    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:14.064471    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:14.064480    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:14 GMT
	I1212 12:17:14.064486    6422 round_trippers.go:580]     Audit-Id: 9f0013e6-a55d-4a96-9943-b1a9574ae994
	I1212 12:17:14.064491    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:14.064496    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:14.064501    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:14.064505    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:14.064584    6422 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"433","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 12:17:14.064920    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:14.064927    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:14.064933    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:14.064939    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:14.071174    6422 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 12:17:14.071188    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:14.071194    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:14.071199    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:14.071203    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:14 GMT
	I1212 12:17:14.071208    6422 round_trippers.go:580]     Audit-Id: f6ef68e0-9b54-48c9-88c0-58d9771ce727
	I1212 12:17:14.071212    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:14.071217    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:14.071325    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"427","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 12:17:14.562427    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:17:14.562447    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:14.562459    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:14.562469    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:14.565103    6422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:17:14.565124    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:14.565136    6422 round_trippers.go:580]     Audit-Id: e3f5ec0e-5da0-4066-8387-6fa014deeb32
	I1212 12:17:14.565148    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:14.565160    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:14.565167    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:14.565173    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:14.565179    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:14 GMT
	I1212 12:17:14.565390    6422 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"433","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 12:17:14.565769    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:14.565776    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:14.565781    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:14.565787    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:14.567294    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:14.567313    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:14.567319    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:14.567327    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:14 GMT
	I1212 12:17:14.567336    6422 round_trippers.go:580]     Audit-Id: bd207364-8db0-4ca7-8242-d0696f7c41cb
	I1212 12:17:14.567343    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:14.567350    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:14.567356    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:14.567475    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"427","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 12:17:15.062067    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:17:15.062093    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:15.062105    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:15.062115    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:15.065214    6422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 12:17:15.065228    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:15.065236    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:15.065278    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:15 GMT
	I1212 12:17:15.065290    6422 round_trippers.go:580]     Audit-Id: 2d99cb5d-d656-4bdf-af3d-6b8df3f29b68
	I1212 12:17:15.065297    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:15.065304    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:15.065314    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:15.065430    6422 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"433","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1212 12:17:15.065807    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:15.065819    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:15.065829    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:15.065836    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:15.067735    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:15.067757    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:15.067768    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:15 GMT
	I1212 12:17:15.067776    6422 round_trippers.go:580]     Audit-Id: 44ec4061-f842-482a-9a05-5b7c5b62e1e9
	I1212 12:17:15.067784    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:15.067791    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:15.067796    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:15.067801    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:15.067888    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"427","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 12:17:15.562017    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:17:15.562033    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:15.562041    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:15.562048    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:15.564474    6422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:17:15.564491    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:15.564500    6422 round_trippers.go:580]     Audit-Id: e3ff5bea-73d3-4e24-89e3-d8c5203182fa
	I1212 12:17:15.564507    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:15.564514    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:15.564522    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:15.564537    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:15.564570    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:15 GMT
	I1212 12:17:15.564655    6422 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"446","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I1212 12:17:15.565087    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:15.565095    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:15.565101    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:15.565107    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:15.566601    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:15.566612    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:15.566618    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:15 GMT
	I1212 12:17:15.566623    6422 round_trippers.go:580]     Audit-Id: 31f8e5ef-ad1f-49b5-919b-6b02f62f1d81
	I1212 12:17:15.566630    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:15.566637    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:15.566642    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:15.566646    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:15.566734    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"427","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 12:17:15.566919    6422 pod_ready.go:92] pod "coredns-5dd5756b68-2qgqq" in "kube-system" namespace has status "Ready":"True"
	I1212 12:17:15.566928    6422 pod_ready.go:81] duration metric: took 2.012540637s waiting for pod "coredns-5dd5756b68-2qgqq" in "kube-system" namespace to be "Ready" ...
	I1212 12:17:15.566934    6422 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:17:15.566985    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-675000
	I1212 12:17:15.566990    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:15.566996    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:15.567001    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:15.568327    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:15.568336    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:15.568341    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:15.568346    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:15.568351    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:15 GMT
	I1212 12:17:15.568355    6422 round_trippers.go:580]     Audit-Id: 3a9b1ad1-4b26-429a-978f-4511f77cbc03
	I1212 12:17:15.568364    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:15.568375    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:15.568421    6422 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-675000","namespace":"kube-system","uid":"bca57b7b-a960-4492-8f79-e6f8aa87f070","resourceVersion":"422","creationTimestamp":"2023-12-12T20:16:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"b8a6875b46c6a0a1242452e56d9fe808","kubernetes.io/config.mirror":"b8a6875b46c6a0a1242452e56d9fe808","kubernetes.io/config.seen":"2023-12-12T20:16:44.273254977Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I1212 12:17:15.568652    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:15.568659    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:15.568665    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:15.568670    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:15.569922    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:15.569938    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:15.569944    6422 round_trippers.go:580]     Audit-Id: 2808bc7f-7eb9-45fb-bce5-463f033132f5
	I1212 12:17:15.569948    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:15.569952    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:15.569964    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:15.569971    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:15.569976    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:15 GMT
	I1212 12:17:15.570084    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"427","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 12:17:15.570258    6422 pod_ready.go:92] pod "etcd-multinode-675000" in "kube-system" namespace has status "Ready":"True"
	I1212 12:17:15.570266    6422 pod_ready.go:81] duration metric: took 3.326349ms waiting for pod "etcd-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:17:15.570273    6422 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:17:15.570303    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-675000
	I1212 12:17:15.570308    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:15.570314    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:15.570319    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:15.571597    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:15.571607    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:15.571613    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:15.571617    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:15 GMT
	I1212 12:17:15.571622    6422 round_trippers.go:580]     Audit-Id: 1883280a-358f-4ff3-9dab-6d80f8c90a0e
	I1212 12:17:15.571628    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:15.571632    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:15.571639    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:15.571875    6422 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-675000","namespace":"kube-system","uid":"8c377a02-06d4-44e2-a275-5a72e7917a90","resourceVersion":"423","creationTimestamp":"2023-12-12T20:16:51Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"a93d2462fc4179c4ac4fea222dfb096b","kubernetes.io/config.mirror":"a93d2462fc4179c4ac4fea222dfb096b","kubernetes.io/config.seen":"2023-12-12T20:16:51.301865289Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7390 chars]
	I1212 12:17:15.572135    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:15.572142    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:15.572148    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:15.572154    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:15.573405    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:15.573413    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:15.573419    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:15.573426    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:15.573432    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:15 GMT
	I1212 12:17:15.573437    6422 round_trippers.go:580]     Audit-Id: 2d2b9e89-d13d-43f9-9b08-22546cc95c30
	I1212 12:17:15.573441    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:15.573450    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:15.573618    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"427","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 12:17:15.573792    6422 pod_ready.go:92] pod "kube-apiserver-multinode-675000" in "kube-system" namespace has status "Ready":"True"
	I1212 12:17:15.573800    6422 pod_ready.go:81] duration metric: took 3.520819ms waiting for pod "kube-apiserver-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:17:15.573807    6422 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:17:15.573840    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-675000
	I1212 12:17:15.573845    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:15.573850    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:15.573866    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:15.575124    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:15.575135    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:15.575143    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:15.575155    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:15.575162    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:15.575168    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:15 GMT
	I1212 12:17:15.575175    6422 round_trippers.go:580]     Audit-Id: dc60eb58-397b-4c53-9ed7-89bac9753642
	I1212 12:17:15.575181    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:15.575349    6422 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-675000","namespace":"kube-system","uid":"d99bab41-1594-4f91-b6cf-63f143cbd1fb","resourceVersion":"421","creationTimestamp":"2023-12-12T20:16:51Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e889149f3645071732e65c53e76071e","kubernetes.io/config.mirror":"8e889149f3645071732e65c53e76071e","kubernetes.io/config.seen":"2023-12-12T20:16:51.301865920Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6965 chars]
	I1212 12:17:15.575595    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:15.575602    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:15.575608    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:15.575613    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:15.576740    6422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:17:15.576749    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:15.576754    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:15 GMT
	I1212 12:17:15.576783    6422 round_trippers.go:580]     Audit-Id: 9ed82a10-656a-4337-a64b-f137a5d4f3fd
	I1212 12:17:15.576796    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:15.576801    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:15.576806    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:15.576812    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:15.576916    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"427","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 12:17:15.577109    6422 pod_ready.go:92] pod "kube-controller-manager-multinode-675000" in "kube-system" namespace has status "Ready":"True"
	I1212 12:17:15.577117    6422 pod_ready.go:81] duration metric: took 3.305394ms waiting for pod "kube-controller-manager-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:17:15.577125    6422 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q4dfx" in "kube-system" namespace to be "Ready" ...
	I1212 12:17:15.747065    6422 request.go:629] Waited for 169.901818ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q4dfx
	I1212 12:17:15.747139    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q4dfx
	I1212 12:17:15.747150    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:15.747158    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:15.747166    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:15.749373    6422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:17:15.749386    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:15.749393    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:15 GMT
	I1212 12:17:15.749397    6422 round_trippers.go:580]     Audit-Id: 460c291a-3ab6-43d1-8068-d5c45303e0bf
	I1212 12:17:15.749402    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:15.749406    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:15.749410    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:15.749416    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:15.749582    6422 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q4dfx","generateName":"kube-proxy-","namespace":"kube-system","uid":"2a62b5cc-b780-4ef5-8663-4a01ca0e2932","resourceVersion":"403","creationTimestamp":"2023-12-12T20:17:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5e692c0d-042c-458d-9e34-28feed1938bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e692c0d-042c-458d-9e34-28feed1938bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I1212 12:17:15.947705    6422 request.go:629] Waited for 197.826908ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:15.947788    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:15.947799    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:15.947812    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:15.947822    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:15.950740    6422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:17:15.950754    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:15.950807    6422 round_trippers.go:580]     Audit-Id: f463918b-428f-4b11-9731-fec729af7520
	I1212 12:17:15.950843    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:15.950853    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:15.950865    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:15.950872    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:15.950878    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:16 GMT
	I1212 12:17:15.951044    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"427","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 12:17:15.951309    6422 pod_ready.go:92] pod "kube-proxy-q4dfx" in "kube-system" namespace has status "Ready":"True"
	I1212 12:17:15.951319    6422 pod_ready.go:81] duration metric: took 374.19473ms waiting for pod "kube-proxy-q4dfx" in "kube-system" namespace to be "Ready" ...
	I1212 12:17:15.951328    6422 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:17:16.148894    6422 request.go:629] Waited for 197.522677ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-675000
	I1212 12:17:16.148976    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-675000
	I1212 12:17:16.148987    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:16.148999    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:16.149009    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:16.151702    6422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:17:16.151717    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:16.151725    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:16.151736    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:16.151743    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:16 GMT
	I1212 12:17:16.151750    6422 round_trippers.go:580]     Audit-Id: 6a12707d-11ca-4c2a-9738-c1831203771d
	I1212 12:17:16.151756    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:16.151762    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:16.151906    6422 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-675000","namespace":"kube-system","uid":"a51d1149-64de-4c6e-a8ae-d04d45097278","resourceVersion":"396","creationTimestamp":"2023-12-12T20:16:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"94c171efd7c72f0a76d945c5e6e993d1","kubernetes.io/config.mirror":"94c171efd7c72f0a76d945c5e6e993d1","kubernetes.io/config.seen":"2023-12-12T20:16:51.301860165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I1212 12:17:16.347398    6422 request.go:629] Waited for 195.196098ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:16.347495    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:17:16.347506    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:16.347518    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:16.347529    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:16.350862    6422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 12:17:16.350876    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:16.350884    6422 round_trippers.go:580]     Audit-Id: 289b1220-cc38-41e5-a42e-2b827a1c950b
	I1212 12:17:16.350890    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:16.350896    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:16.350903    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:16.350909    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:16.350919    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:16 GMT
	I1212 12:17:16.351051    6422 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"427","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I1212 12:17:16.351287    6422 pod_ready.go:92] pod "kube-scheduler-multinode-675000" in "kube-system" namespace has status "Ready":"True"
	I1212 12:17:16.351296    6422 pod_ready.go:81] duration metric: took 399.968437ms waiting for pod "kube-scheduler-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:17:16.351303    6422 pod_ready.go:38] duration metric: took 2.802273148s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 12:17:16.351321    6422 api_server.go:52] waiting for apiserver process to appear ...
	I1212 12:17:16.351372    6422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 12:17:16.360552    6422 command_runner.go:130] > 1896
	I1212 12:17:16.360649    6422 api_server.go:72] duration metric: took 11.940420653s to wait for apiserver process to appear ...
	I1212 12:17:16.360658    6422 api_server.go:88] waiting for apiserver healthz status ...
	I1212 12:17:16.360672    6422 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 12:17:16.364056    6422 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I1212 12:17:16.364096    6422 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I1212 12:17:16.364102    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:16.364109    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:16.364115    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:16.364864    6422 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 12:17:16.364873    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:16.364879    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:16 GMT
	I1212 12:17:16.364886    6422 round_trippers.go:580]     Audit-Id: 4be812d9-7a13-4f94-b093-9265f3d6217b
	I1212 12:17:16.364893    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:16.364897    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:16.364902    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:16.364906    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:16.364917    6422 round_trippers.go:580]     Content-Length: 264
	I1212 12:17:16.364929    6422 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 12:17:16.364981    6422 api_server.go:141] control plane version: v1.28.4
	I1212 12:17:16.364989    6422 api_server.go:131] duration metric: took 4.327161ms to wait for apiserver health ...
	I1212 12:17:16.364996    6422 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 12:17:16.547267    6422 request.go:629] Waited for 182.232486ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 12:17:16.547372    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 12:17:16.547382    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:16.547393    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:16.547403    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:16.551027    6422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 12:17:16.551042    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:16.551050    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:16.551056    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:16.551063    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:16 GMT
	I1212 12:17:16.551070    6422 round_trippers.go:580]     Audit-Id: 5bbb9405-79ba-45a7-99fc-8706cf7fcef1
	I1212 12:17:16.551077    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:16.551083    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:16.551610    6422 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"446","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54048 chars]
	I1212 12:17:16.552857    6422 system_pods.go:59] 8 kube-system pods found
	I1212 12:17:16.552871    6422 system_pods.go:61] "coredns-5dd5756b68-2qgqq" [6bc47af7-f871-4daa-97ca-23500d80fc1b] Running
	I1212 12:17:16.552875    6422 system_pods.go:61] "etcd-multinode-675000" [bca57b7b-a960-4492-8f79-e6f8aa87f070] Running
	I1212 12:17:16.552879    6422 system_pods.go:61] "kindnet-4vq6m" [c528f3f9-a180-497c-892d-0305174740c9] Running
	I1212 12:17:16.552883    6422 system_pods.go:61] "kube-apiserver-multinode-675000" [8c377a02-06d4-44e2-a275-5a72e7917a90] Running
	I1212 12:17:16.552887    6422 system_pods.go:61] "kube-controller-manager-multinode-675000" [d99bab41-1594-4f91-b6cf-63f143cbd1fb] Running
	I1212 12:17:16.552891    6422 system_pods.go:61] "kube-proxy-q4dfx" [2a62b5cc-b780-4ef5-8663-4a01ca0e2932] Running
	I1212 12:17:16.552895    6422 system_pods.go:61] "kube-scheduler-multinode-675000" [a51d1149-64de-4c6e-a8ae-d04d45097278] Running
	I1212 12:17:16.552898    6422 system_pods.go:61] "storage-provisioner" [6f39d754-bc48-49e5-a0e4-fda2cbf521b7] Running
	I1212 12:17:16.552903    6422 system_pods.go:74] duration metric: took 187.905545ms to wait for pod list to return data ...
	I1212 12:17:16.552908    6422 default_sa.go:34] waiting for default service account to be created ...
	I1212 12:17:16.747248    6422 request.go:629] Waited for 194.283706ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I1212 12:17:16.747302    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I1212 12:17:16.747319    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:16.747390    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:16.747403    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:16.750061    6422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:17:16.750076    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:16.750083    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:16.750095    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:16.750103    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:16.750110    6422 round_trippers.go:580]     Content-Length: 261
	I1212 12:17:16.750117    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:16 GMT
	I1212 12:17:16.750133    6422 round_trippers.go:580]     Audit-Id: a55c2f2a-3742-4cdb-b474-e90e98b2e535
	I1212 12:17:16.750140    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:16.750155    6422 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"0a0d85bc-4ea5-43ff-8744-122b952a826b","resourceVersion":"354","creationTimestamp":"2023-12-12T20:17:04Z"}}]}
	I1212 12:17:16.750349    6422 default_sa.go:45] found service account: "default"
	I1212 12:17:16.750360    6422 default_sa.go:55] duration metric: took 197.449571ms for default service account to be created ...
	I1212 12:17:16.750367    6422 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 12:17:16.947242    6422 request.go:629] Waited for 196.828354ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 12:17:16.947406    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 12:17:16.947422    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:16.947434    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:16.947446    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:16.951136    6422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 12:17:16.951154    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:16.951162    6422 round_trippers.go:580]     Audit-Id: 40ad4bc4-c03e-4f67-9ba2-cbbbc69a01cb
	I1212 12:17:16.951168    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:16.951175    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:16.951184    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:16.951199    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:16.951213    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:17 GMT
	I1212 12:17:16.951804    6422 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"446","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54048 chars]
	I1212 12:17:16.953120    6422 system_pods.go:86] 8 kube-system pods found
	I1212 12:17:16.953131    6422 system_pods.go:89] "coredns-5dd5756b68-2qgqq" [6bc47af7-f871-4daa-97ca-23500d80fc1b] Running
	I1212 12:17:16.953135    6422 system_pods.go:89] "etcd-multinode-675000" [bca57b7b-a960-4492-8f79-e6f8aa87f070] Running
	I1212 12:17:16.953142    6422 system_pods.go:89] "kindnet-4vq6m" [c528f3f9-a180-497c-892d-0305174740c9] Running
	I1212 12:17:16.953146    6422 system_pods.go:89] "kube-apiserver-multinode-675000" [8c377a02-06d4-44e2-a275-5a72e7917a90] Running
	I1212 12:17:16.953150    6422 system_pods.go:89] "kube-controller-manager-multinode-675000" [d99bab41-1594-4f91-b6cf-63f143cbd1fb] Running
	I1212 12:17:16.953153    6422 system_pods.go:89] "kube-proxy-q4dfx" [2a62b5cc-b780-4ef5-8663-4a01ca0e2932] Running
	I1212 12:17:16.953160    6422 system_pods.go:89] "kube-scheduler-multinode-675000" [a51d1149-64de-4c6e-a8ae-d04d45097278] Running
	I1212 12:17:16.953171    6422 system_pods.go:89] "storage-provisioner" [6f39d754-bc48-49e5-a0e4-fda2cbf521b7] Running
	I1212 12:17:16.953176    6422 system_pods.go:126] duration metric: took 202.80794ms to wait for k8s-apps to be running ...
	I1212 12:17:16.953181    6422 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 12:17:16.953225    6422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 12:17:16.961999    6422 system_svc.go:56] duration metric: took 8.813498ms WaitForService to wait for kubelet.
	I1212 12:17:16.962014    6422 kubeadm.go:581] duration metric: took 12.541795746s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 12:17:16.962029    6422 node_conditions.go:102] verifying NodePressure condition ...
	I1212 12:17:17.147374    6422 request.go:629] Waited for 185.238693ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I1212 12:17:17.147418    6422 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I1212 12:17:17.147426    6422 round_trippers.go:469] Request Headers:
	I1212 12:17:17.147439    6422 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:17:17.147448    6422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:17:17.150196    6422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:17:17.150214    6422 round_trippers.go:577] Response Headers:
	I1212 12:17:17.150224    6422 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:17:17.150236    6422 round_trippers.go:580]     Content-Type: application/json
	I1212 12:17:17.150245    6422 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:17:17.150256    6422 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:17:17.150264    6422 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:17:17 GMT
	I1212 12:17:17.150271    6422 round_trippers.go:580]     Audit-Id: f08a9ba1-7624-4ff9-ab92-3269ac0eeb8d
	I1212 12:17:17.150719    6422 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"427","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4834 chars]
	I1212 12:17:17.151022    6422 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 12:17:17.151044    6422 node_conditions.go:123] node cpu capacity is 2
	I1212 12:17:17.151064    6422 node_conditions.go:105] duration metric: took 189.033424ms to run NodePressure ...
	I1212 12:17:17.151074    6422 start.go:228] waiting for startup goroutines ...
	I1212 12:17:17.151081    6422 start.go:233] waiting for cluster config update ...
	I1212 12:17:17.151093    6422 start.go:242] writing updated cluster config ...
	I1212 12:17:17.151434    6422 ssh_runner.go:195] Run: rm -f paused
	I1212 12:17:17.190848    6422 start.go:600] kubectl: 1.28.2, cluster: 1.28.4 (minor skew: 0)
	I1212 12:17:17.216768    6422 out.go:177] * Done! kubectl is now configured to use "multinode-675000" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Journal begins at Tue 2023-12-12 20:16:26 UTC, ends at Tue 2023-12-12 20:17:18 UTC. --
	Dec 12 20:17:05 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:05.020463042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:17:07 multinode-675000 cri-dockerd[1072]: time="2023-12-12T20:17:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c6f5291d5248b74fff19f9c077b9dc8bf2f8f29efca3f5948c91b843c8886d53/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 20:17:10 multinode-675000 cri-dockerd[1072]: time="2023-12-12T20:17:10Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20230809-80a64d96: Status: Downloaded newer image for kindest/kindnetd:v20230809-80a64d96"
	Dec 12 20:17:10 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:10.182008826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 20:17:10 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:10.182168500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:17:10 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:10.182231489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 20:17:10 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:10.182256580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:17:13 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:13.855347059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 20:17:13 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:13.855419600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:17:13 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:13.855437011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 20:17:13 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:13.855446984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:17:13 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:13.861470655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 20:17:13 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:13.861523717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:17:13 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:13.861540968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 20:17:13 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:13.861549889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:17:14 multinode-675000 cri-dockerd[1072]: time="2023-12-12T20:17:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6800e8084788d99302c5af2a76d218987ab9c236591a90ede32351961496d3b7/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 20:17:14 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:14.212754293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 20:17:14 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:14.212895481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:17:14 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:14.212923545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 20:17:14 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:14.212936812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:17:14 multinode-675000 cri-dockerd[1072]: time="2023-12-12T20:17:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/906956fbad371019ed93dffd7d99c74fc6e75c7aa8b72de6fe9c01e04f5fe24d/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 20:17:14 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:14.340684274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 20:17:14 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:14.340998987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:17:14 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:14.341134894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 20:17:14 multinode-675000 dockerd[1187]: time="2023-12-12T20:17:14.341219005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5139a190a0a70       ead0a4a53df89                                                                              4 seconds ago       Running             coredns                   0                   906956fbad371       coredns-5dd5756b68-2qgqq
	0b9a6a315baee       6e38f40d628db                                                                              4 seconds ago       Running             storage-provisioner       0                   6800e8084788d       storage-provisioner
	a391a1302e24d       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052   8 seconds ago       Running             kindnet-cni               0                   c6f5291d5248b       kindnet-4vq6m
	5c4ec41a543b9       83f6cc407eed8                                                                              14 seconds ago      Running             kube-proxy                0                   c4d605b91fefd       kube-proxy-q4dfx
	ec1ccfe051cf8       e3db313c6dbc0                                                                              33 seconds ago      Running             kube-scheduler            0                   ec16ed8743035       kube-scheduler-multinode-675000
	0dfb53ca11626       73deb9a3f7025                                                                              33 seconds ago      Running             etcd                      0                   759eb904c17af       etcd-multinode-675000
	2e3863acd67e9       d058aa5ab969c                                                                              33 seconds ago      Running             kube-controller-manager   0                   5365eadc60c2d       kube-controller-manager-multinode-675000
	6a5980fcc6dc9       7fe0e6f37db33                                                                              33 seconds ago      Running             kube-apiserver            0                   32f46c3efb2c7       kube-apiserver-multinode-675000
	
	
	==> coredns [5139a190a0a7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39661 - 33289 "HINFO IN 696511843846326458.4911786665791153147. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.013651973s
	
	
	==> describe nodes <==
	Name:               multinode-675000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-675000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=multinode-675000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T12_16_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 20:16:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-675000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 20:17:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 20:17:13 +0000   Tue, 12 Dec 2023 20:16:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 20:17:13 +0000   Tue, 12 Dec 2023 20:16:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 20:17:13 +0000   Tue, 12 Dec 2023 20:16:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 20:17:13 +0000   Tue, 12 Dec 2023 20:17:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-675000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	System Info:
	  Machine ID:                 7197e430404f4c6f8eae0cdc08635182
	  System UUID:                fbe411ee-0000-0000-b1fb-f01898ef957c
	  Boot ID:                    7e3afd3f-fd2a-4d14-b6fe-b0cfe9c1ffec
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-2qgqq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15s
	  kube-system                 etcd-multinode-675000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29s
	  kube-system                 kindnet-4vq6m                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16s
	  kube-system                 kube-apiserver-multinode-675000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-multinode-675000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-q4dfx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-scheduler-multinode-675000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node multinode-675000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node multinode-675000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x7 over 35s)  kubelet          Node multinode-675000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28s                kubelet          Node multinode-675000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s                kubelet          Node multinode-675000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s                kubelet          Node multinode-675000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16s                node-controller  Node multinode-675000 event: Registered Node multinode-675000 in Controller
	  Normal  NodeReady                6s                 kubelet          Node multinode-675000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.007129] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.258812] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.041816] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.890883] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.892506] systemd-fstab-generator[513]: Ignoring "noauto" for root device
	[  +0.093066] systemd-fstab-generator[524]: Ignoring "noauto" for root device
	[  +0.709240] systemd-fstab-generator[737]: Ignoring "noauto" for root device
	[  +0.213562] systemd-fstab-generator[774]: Ignoring "noauto" for root device
	[  +0.096924] systemd-fstab-generator[791]: Ignoring "noauto" for root device
	[  +0.113042] systemd-fstab-generator[829]: Ignoring "noauto" for root device
	[  +1.207569] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.119522] systemd-fstab-generator[986]: Ignoring "noauto" for root device
	[  +0.083016] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +0.091893] systemd-fstab-generator[1008]: Ignoring "noauto" for root device
	[  +0.103362] systemd-fstab-generator[1019]: Ignoring "noauto" for root device
	[  +0.114979] systemd-fstab-generator[1038]: Ignoring "noauto" for root device
	[  +5.288825] systemd-fstab-generator[1171]: Ignoring "noauto" for root device
	[  +1.279359] kauditd_printk_skb: 13 callbacks suppressed
	[  +3.989404] systemd-fstab-generator[1552]: Ignoring "noauto" for root device
	[  +7.188581] systemd-fstab-generator[2414]: Ignoring "noauto" for root device
	[Dec12 20:17] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.384810] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [0dfb53ca1162] <==
	{"level":"info","ts":"2023-12-12T20:16:46.130248Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"e0290fa3161c5471","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-12-12T20:16:46.130295Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T20:16:46.130311Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T20:16:46.130317Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T20:16:46.132752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2023-12-12T20:16:46.132938Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2023-12-12T20:16:46.212101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-12T20:16:46.212146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-12T20:16:46.212158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2023-12-12T20:16:46.212167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2023-12-12T20:16:46.212171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2023-12-12T20:16:46.212178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T20:16:46.212184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2023-12-12T20:16:46.21448Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:16:46.214911Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-675000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T20:16:46.215052Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:16:46.215333Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:16:46.215433Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:16:46.215068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:16:46.219011Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2023-12-12T20:16:46.215082Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:16:46.2198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T20:16:46.223336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T20:16:46.223372Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T20:16:48.80496Z","caller":"traceutil/trace.go:171","msg":"trace[1901849977] transaction","detail":"{read_only:false; response_revision:24; number_of_response:1; }","duration":"101.053217ms","start":"2023-12-12T20:16:48.703894Z","end":"2023-12-12T20:16:48.804947Z","steps":["trace[1901849977] 'process raft request'  (duration: 60.497972ms)","trace[1901849977] 'compare'  (duration: 40.496633ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:17:19 up 1 min,  0 users,  load average: 0.65, 0.17, 0.06
	Linux multinode-675000 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [a391a1302e24] <==
	I1212 20:17:10.309204       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 20:17:10.309282       1 main.go:107] hostIP = 192.169.0.13
	podIP = 192.169.0.13
	I1212 20:17:10.309416       1 main.go:116] setting mtu 1500 for CNI 
	I1212 20:17:10.309458       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 20:17:10.309478       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 20:17:10.609013       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:17:10.609049       1 main.go:227] handling current node
	
	
	==> kube-apiserver [6a5980fcc6dc] <==
	I1212 20:16:48.672910       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 20:16:48.676250       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 20:16:48.677484       1 controller.go:624] quota admission added evaluator for: namespaces
	I1212 20:16:48.682336       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 20:16:48.692100       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 20:16:48.692246       1 aggregator.go:166] initial CRD sync complete...
	I1212 20:16:48.692338       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 20:16:48.692381       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:16:48.692465       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:16:48.717331       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:16:49.575857       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1212 20:16:49.578538       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1212 20:16:49.578547       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:16:49.892209       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:16:49.918134       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:16:49.987524       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1212 20:16:49.994326       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I1212 20:16:49.995361       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 20:16:49.999720       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:16:50.690382       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 20:16:51.221140       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 20:16:51.230372       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1212 20:16:51.238669       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 20:17:03.591565       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1212 20:17:04.384602       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2e3863acd67e] <==
	I1212 20:17:03.597170       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1212 20:17:03.603638       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4vq6m"
	I1212 20:17:03.603673       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q4dfx"
	I1212 20:17:03.632240       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1212 20:17:03.633170       1 shared_informer.go:318] Caches are synced for endpoint
	I1212 20:17:03.720712       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 20:17:03.784064       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 20:17:04.112473       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:17:04.180891       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:17:04.180925       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 20:17:04.387915       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1212 20:17:04.569116       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1212 20:17:04.628590       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-7ddxh"
	I1212 20:17:04.642387       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-2qgqq"
	I1212 20:17:04.666378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="278.873535ms"
	I1212 20:17:04.675935       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-7ddxh"
	I1212 20:17:04.686219       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.811252ms"
	I1212 20:17:04.690363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="4.119126ms"
	I1212 20:17:04.690483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.544µs"
	I1212 20:17:13.476941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.916µs"
	I1212 20:17:13.496255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.815µs"
	I1212 20:17:13.582289       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1212 20:17:15.437619       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.389µs"
	I1212 20:17:15.456455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.706842ms"
	I1212 20:17:15.456719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.35µs"
	
	
	==> kube-proxy [5c4ec41a543b] <==
	I1212 20:17:05.147381       1 server_others.go:69] "Using iptables proxy"
	I1212 20:17:05.156379       1 node.go:141] Successfully retrieved node IP: 192.169.0.13
	I1212 20:17:05.209428       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 20:17:05.209444       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 20:17:05.221145       1 server_others.go:152] "Using iptables Proxier"
	I1212 20:17:05.221199       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 20:17:05.221326       1 server.go:846] "Version info" version="v1.28.4"
	I1212 20:17:05.221358       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:17:05.222088       1 config.go:315] "Starting node config controller"
	I1212 20:17:05.222117       1 config.go:188] "Starting service config controller"
	I1212 20:17:05.222123       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 20:17:05.222134       1 config.go:97] "Starting endpoint slice config controller"
	I1212 20:17:05.222136       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 20:17:05.224472       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 20:17:05.325561       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 20:17:05.325623       1 shared_informer.go:318] Caches are synced for service config
	I1212 20:17:05.325803       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ec1ccfe051cf] <==
	W1212 20:16:48.669576       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 20:16:48.669614       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 20:16:48.670591       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 20:16:48.670629       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 20:16:48.670761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 20:16:48.670797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 20:16:48.670806       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 20:16:48.670812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 20:16:48.670822       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 20:16:48.670827       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 20:16:48.671884       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 20:16:48.671978       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:16:49.523847       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 20:16:49.523874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 20:16:49.553927       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 20:16:49.553950       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 20:16:49.572215       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 20:16:49.572242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 20:16:49.611936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 20:16:49.612040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 20:16:49.643230       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 20:16:49.643284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 20:16:49.760509       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 20:16:49.760528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1212 20:16:49.954233       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 20:16:26 UTC, ends at Tue 2023-12-12 20:17:20 UTC. --
	Dec 12 20:17:03 multinode-675000 kubelet[2433]: I1212 20:17:03.693887    2433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c528f3f9-a180-497c-892d-0305174740c9-xtables-lock\") pod \"kindnet-4vq6m\" (UID: \"c528f3f9-a180-497c-892d-0305174740c9\") " pod="kube-system/kindnet-4vq6m"
	Dec 12 20:17:03 multinode-675000 kubelet[2433]: I1212 20:17:03.693948    2433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a62b5cc-b780-4ef5-8663-4a01ca0e2932-lib-modules\") pod \"kube-proxy-q4dfx\" (UID: \"2a62b5cc-b780-4ef5-8663-4a01ca0e2932\") " pod="kube-system/kube-proxy-q4dfx"
	Dec 12 20:17:03 multinode-675000 kubelet[2433]: I1212 20:17:03.694089    2433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c528f3f9-a180-497c-892d-0305174740c9-lib-modules\") pod \"kindnet-4vq6m\" (UID: \"c528f3f9-a180-497c-892d-0305174740c9\") " pod="kube-system/kindnet-4vq6m"
	Dec 12 20:17:03 multinode-675000 kubelet[2433]: I1212 20:17:03.694140    2433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a62b5cc-b780-4ef5-8663-4a01ca0e2932-xtables-lock\") pod \"kube-proxy-q4dfx\" (UID: \"2a62b5cc-b780-4ef5-8663-4a01ca0e2932\") " pod="kube-system/kube-proxy-q4dfx"
	Dec 12 20:17:03 multinode-675000 kubelet[2433]: I1212 20:17:03.694234    2433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c528f3f9-a180-497c-892d-0305174740c9-cni-cfg\") pod \"kindnet-4vq6m\" (UID: \"c528f3f9-a180-497c-892d-0305174740c9\") " pod="kube-system/kindnet-4vq6m"
	Dec 12 20:17:03 multinode-675000 kubelet[2433]: I1212 20:17:03.694286    2433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cxnd\" (UniqueName: \"kubernetes.io/projected/c528f3f9-a180-497c-892d-0305174740c9-kube-api-access-6cxnd\") pod \"kindnet-4vq6m\" (UID: \"c528f3f9-a180-497c-892d-0305174740c9\") " pod="kube-system/kindnet-4vq6m"
	Dec 12 20:17:03 multinode-675000 kubelet[2433]: I1212 20:17:03.694385    2433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dckc9\" (UniqueName: \"kubernetes.io/projected/2a62b5cc-b780-4ef5-8663-4a01ca0e2932-kube-api-access-dckc9\") pod \"kube-proxy-q4dfx\" (UID: \"2a62b5cc-b780-4ef5-8663-4a01ca0e2932\") " pod="kube-system/kube-proxy-q4dfx"
	Dec 12 20:17:03 multinode-675000 kubelet[2433]: E1212 20:17:03.802299    2433 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 12 20:17:03 multinode-675000 kubelet[2433]: E1212 20:17:03.802337    2433 projected.go:198] Error preparing data for projected volume kube-api-access-dckc9 for pod kube-system/kube-proxy-q4dfx: configmap "kube-root-ca.crt" not found
	Dec 12 20:17:03 multinode-675000 kubelet[2433]: E1212 20:17:03.802412    2433 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2a62b5cc-b780-4ef5-8663-4a01ca0e2932-kube-api-access-dckc9 podName:2a62b5cc-b780-4ef5-8663-4a01ca0e2932 nodeName:}" failed. No retries permitted until 2023-12-12 20:17:04.3023643 +0000 UTC m=+13.102513898 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dckc9" (UniqueName: "kubernetes.io/projected/2a62b5cc-b780-4ef5-8663-4a01ca0e2932-kube-api-access-dckc9") pod "kube-proxy-q4dfx" (UID: "2a62b5cc-b780-4ef5-8663-4a01ca0e2932") : configmap "kube-root-ca.crt" not found
	Dec 12 20:17:03 multinode-675000 kubelet[2433]: E1212 20:17:03.803660    2433 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 12 20:17:03 multinode-675000 kubelet[2433]: E1212 20:17:03.803736    2433 projected.go:198] Error preparing data for projected volume kube-api-access-6cxnd for pod kube-system/kindnet-4vq6m: configmap "kube-root-ca.crt" not found
	Dec 12 20:17:03 multinode-675000 kubelet[2433]: E1212 20:17:03.803813    2433 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c528f3f9-a180-497c-892d-0305174740c9-kube-api-access-6cxnd podName:c528f3f9-a180-497c-892d-0305174740c9 nodeName:}" failed. No retries permitted until 2023-12-12 20:17:04.303803621 +0000 UTC m=+13.103953219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6cxnd" (UniqueName: "kubernetes.io/projected/c528f3f9-a180-497c-892d-0305174740c9-kube-api-access-6cxnd") pod "kindnet-4vq6m" (UID: "c528f3f9-a180-497c-892d-0305174740c9") : configmap "kube-root-ca.crt" not found
	Dec 12 20:17:07 multinode-675000 kubelet[2433]: I1212 20:17:07.326875    2433 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f5291d5248b74fff19f9c077b9dc8bf2f8f29efca3f5948c91b843c8886d53"
	Dec 12 20:17:10 multinode-675000 kubelet[2433]: I1212 20:17:10.358318    2433 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-q4dfx" podStartSLOduration=7.358292213 podCreationTimestamp="2023-12-12 20:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 20:17:05.446316549 +0000 UTC m=+14.246466149" watchObservedRunningTime="2023-12-12 20:17:10.358292213 +0000 UTC m=+19.158441820"
	Dec 12 20:17:11 multinode-675000 kubelet[2433]: I1212 20:17:11.365956    2433 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-4vq6m" podStartSLOduration=5.5623086520000005 podCreationTimestamp="2023-12-12 20:17:03 +0000 UTC" firstStartedPulling="2023-12-12 20:17:07.32776007 +0000 UTC m=+16.127909670" lastFinishedPulling="2023-12-12 20:17:10.131340845 +0000 UTC m=+18.931490442" observedRunningTime="2023-12-12 20:17:10.362468807 +0000 UTC m=+19.162618407" watchObservedRunningTime="2023-12-12 20:17:11.365889424 +0000 UTC m=+20.166039022"
	Dec 12 20:17:13 multinode-675000 kubelet[2433]: I1212 20:17:13.457922    2433 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 12 20:17:13 multinode-675000 kubelet[2433]: I1212 20:17:13.478117    2433 topology_manager.go:215] "Topology Admit Handler" podUID="6bc47af7-f871-4daa-97ca-23500d80fc1b" podNamespace="kube-system" podName="coredns-5dd5756b68-2qgqq"
	Dec 12 20:17:13 multinode-675000 kubelet[2433]: I1212 20:17:13.478203    2433 topology_manager.go:215] "Topology Admit Handler" podUID="6f39d754-bc48-49e5-a0e4-fda2cbf521b7" podNamespace="kube-system" podName="storage-provisioner"
	Dec 12 20:17:13 multinode-675000 kubelet[2433]: I1212 20:17:13.586392    2433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume\") pod \"coredns-5dd5756b68-2qgqq\" (UID: \"6bc47af7-f871-4daa-97ca-23500d80fc1b\") " pod="kube-system/coredns-5dd5756b68-2qgqq"
	Dec 12 20:17:13 multinode-675000 kubelet[2433]: I1212 20:17:13.586578    2433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rxll\" (UniqueName: \"kubernetes.io/projected/6f39d754-bc48-49e5-a0e4-fda2cbf521b7-kube-api-access-2rxll\") pod \"storage-provisioner\" (UID: \"6f39d754-bc48-49e5-a0e4-fda2cbf521b7\") " pod="kube-system/storage-provisioner"
	Dec 12 20:17:13 multinode-675000 kubelet[2433]: I1212 20:17:13.586708    2433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnqcg\" (UniqueName: \"kubernetes.io/projected/6bc47af7-f871-4daa-97ca-23500d80fc1b-kube-api-access-bnqcg\") pod \"coredns-5dd5756b68-2qgqq\" (UID: \"6bc47af7-f871-4daa-97ca-23500d80fc1b\") " pod="kube-system/coredns-5dd5756b68-2qgqq"
	Dec 12 20:17:13 multinode-675000 kubelet[2433]: I1212 20:17:13.586823    2433 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6f39d754-bc48-49e5-a0e4-fda2cbf521b7-tmp\") pod \"storage-provisioner\" (UID: \"6f39d754-bc48-49e5-a0e4-fda2cbf521b7\") " pod="kube-system/storage-provisioner"
	Dec 12 20:17:15 multinode-675000 kubelet[2433]: I1212 20:17:15.441676    2433 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-2qgqq" podStartSLOduration=11.440209253999999 podCreationTimestamp="2023-12-12 20:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 20:17:15.438265483 +0000 UTC m=+24.238415089" watchObservedRunningTime="2023-12-12 20:17:15.440209254 +0000 UTC m=+24.240358856"
	Dec 12 20:17:15 multinode-675000 kubelet[2433]: I1212 20:17:15.442726    2433 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=10.442698655 podCreationTimestamp="2023-12-12 20:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-12 20:17:14.39015913 +0000 UTC m=+23.190308739" watchObservedRunningTime="2023-12-12 20:17:15.442698655 +0000 UTC m=+24.242848262"
	
	
	==> storage-provisioner [0b9a6a315bae] <==
	I1212 20:17:14.312616       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:17:14.322893       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:17:14.322983       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 20:17:14.329967       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:17:14.330645       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e6fbc79-a02a-4f5f-82d7-de5fe00a9d7b", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-675000_2f2ccc3c-bbc3-49dd-b895-ab7f450e9251 became leader
	I1212 20:17:14.330724       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-675000_2f2ccc3c-bbc3-49dd-b895-ab7f450e9251!
	I1212 20:17:14.432075       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-675000_2f2ccc3c-bbc3-49dd-b895-ab7f450e9251!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-675000 -n multinode-675000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-675000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (3.50s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (8.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-675000 stop
multinode_test.go:342: (dbg) Done: out/minikube-darwin-amd64 -p multinode-675000 stop: (8.245124105s)
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-675000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-675000 status: exit status 7 (68.894986ms)

                                                
                                                
-- stdout --
	multinode-675000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-675000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-675000 status --alsologtostderr: exit status 7 (68.803067ms)

                                                
                                                
-- stdout --
	multinode-675000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 12:17:29.198012    6554 out.go:296] Setting OutFile to fd 1 ...
	I1212 12:17:29.198224    6554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:17:29.198230    6554 out.go:309] Setting ErrFile to fd 2...
	I1212 12:17:29.198234    6554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:17:29.198431    6554 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	I1212 12:17:29.198621    6554 out.go:303] Setting JSON to false
	I1212 12:17:29.198644    6554 mustload.go:65] Loading cluster: multinode-675000
	I1212 12:17:29.198679    6554 notify.go:220] Checking for updates...
	I1212 12:17:29.198965    6554 config.go:182] Loaded profile config "multinode-675000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:17:29.198978    6554 status.go:255] checking status of multinode-675000 ...
	I1212 12:17:29.199429    6554 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:17:29.199473    6554 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:17:29.207848    6554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51407
	I1212 12:17:29.208290    6554 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:17:29.208712    6554 main.go:141] libmachine: Using API Version  1
	I1212 12:17:29.208723    6554 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:17:29.208933    6554 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:17:29.209041    6554 main.go:141] libmachine: (multinode-675000) Calling .GetState
	I1212 12:17:29.209134    6554 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:17:29.209202    6554 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6434
	I1212 12:17:29.210191    6554 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid 6434 missing from process table
	I1212 12:17:29.210227    6554 status.go:330] multinode-675000 host status = "Stopped" (err=<nil>)
	I1212 12:17:29.210250    6554 status.go:343] host is not running, skipping remaining checks
	I1212 12:17:29.210256    6554 status.go:257] multinode-675000 status: &{Name:multinode-675000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:361: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-675000 status --alsologtostderr": multinode-675000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:365: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-675000 status --alsologtostderr": multinode-675000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000: exit status 7 (67.995923ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-675000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (8.45s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (55.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-675000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
multinode_test.go:382: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-675000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (51.003967323s)
multinode_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-675000 status --alsologtostderr
multinode_test.go:394: status says both hosts are not running: args "out/minikube-darwin-amd64 -p multinode-675000 status --alsologtostderr": 
-- stdout --
	multinode-675000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 12:18:20.339562    6629 out.go:296] Setting OutFile to fd 1 ...
	I1212 12:18:20.339860    6629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:18:20.339866    6629 out.go:309] Setting ErrFile to fd 2...
	I1212 12:18:20.339870    6629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:18:20.340054    6629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	I1212 12:18:20.340241    6629 out.go:303] Setting JSON to false
	I1212 12:18:20.340265    6629 mustload.go:65] Loading cluster: multinode-675000
	I1212 12:18:20.340299    6629 notify.go:220] Checking for updates...
	I1212 12:18:20.340577    6629 config.go:182] Loaded profile config "multinode-675000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:18:20.340588    6629 status.go:255] checking status of multinode-675000 ...
	I1212 12:18:20.340949    6629 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:18:20.340992    6629 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:18:20.349668    6629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51456
	I1212 12:18:20.350076    6629 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:18:20.350514    6629 main.go:141] libmachine: Using API Version  1
	I1212 12:18:20.350524    6629 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:18:20.350707    6629 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:18:20.350829    6629 main.go:141] libmachine: (multinode-675000) Calling .GetState
	I1212 12:18:20.350910    6629 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:18:20.350974    6629 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6575
	I1212 12:18:20.352005    6629 status.go:330] multinode-675000 host status = "Running" (err=<nil>)
	I1212 12:18:20.352026    6629 host.go:66] Checking if "multinode-675000" exists ...
	I1212 12:18:20.352262    6629 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:18:20.352282    6629 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:18:20.360378    6629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51458
	I1212 12:18:20.360741    6629 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:18:20.361101    6629 main.go:141] libmachine: Using API Version  1
	I1212 12:18:20.361111    6629 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:18:20.361310    6629 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:18:20.361425    6629 main.go:141] libmachine: (multinode-675000) Calling .GetIP
	I1212 12:18:20.361516    6629 host.go:66] Checking if "multinode-675000" exists ...
	I1212 12:18:20.361767    6629 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:18:20.361800    6629 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:18:20.370527    6629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51460
	I1212 12:18:20.370884    6629 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:18:20.371226    6629 main.go:141] libmachine: Using API Version  1
	I1212 12:18:20.371236    6629 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:18:20.371452    6629 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:18:20.371550    6629 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:18:20.371682    6629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 12:18:20.371702    6629 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:18:20.371785    6629 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:18:20.371873    6629 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:18:20.371942    6629 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:18:20.372069    6629 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:18:20.410829    6629 ssh_runner.go:195] Run: systemctl --version
	I1212 12:18:20.414808    6629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 12:18:20.423984    6629 kubeconfig.go:92] found "multinode-675000" server: "https://192.169.0.13:8443"
	I1212 12:18:20.424026    6629 api_server.go:166] Checking apiserver status ...
	I1212 12:18:20.424081    6629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 12:18:20.432839    6629 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1753/cgroup
	I1212 12:18:20.439518    6629 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda93d2462fc4179c4ac4fea222dfb096b/fb02933e38d84983047b7ffd44a869edbcdb966e0335749052e311e44efff800"
	I1212 12:18:20.439564    6629 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda93d2462fc4179c4ac4fea222dfb096b/fb02933e38d84983047b7ffd44a869edbcdb966e0335749052e311e44efff800/freezer.state
	I1212 12:18:20.446524    6629 api_server.go:204] freezer state: "THAWED"
	I1212 12:18:20.446558    6629 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 12:18:20.450700    6629 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I1212 12:18:20.450716    6629 status.go:421] multinode-675000 apiserver status = Running (err=<nil>)
	I1212 12:18:20.450724    6629 status.go:257] multinode-675000 status: &{Name:multinode-675000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:398: status says both kubelets are not running: args "out/minikube-darwin-amd64 -p multinode-675000 status --alsologtostderr": 
-- stdout --
	multinode-675000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 12:18:20.339562    6629 out.go:296] Setting OutFile to fd 1 ...
	I1212 12:18:20.339860    6629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:18:20.339866    6629 out.go:309] Setting ErrFile to fd 2...
	I1212 12:18:20.339870    6629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:18:20.340054    6629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	I1212 12:18:20.340241    6629 out.go:303] Setting JSON to false
	I1212 12:18:20.340265    6629 mustload.go:65] Loading cluster: multinode-675000
	I1212 12:18:20.340299    6629 notify.go:220] Checking for updates...
	I1212 12:18:20.340577    6629 config.go:182] Loaded profile config "multinode-675000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:18:20.340588    6629 status.go:255] checking status of multinode-675000 ...
	I1212 12:18:20.340949    6629 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:18:20.340992    6629 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:18:20.349668    6629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51456
	I1212 12:18:20.350076    6629 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:18:20.350514    6629 main.go:141] libmachine: Using API Version  1
	I1212 12:18:20.350524    6629 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:18:20.350707    6629 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:18:20.350829    6629 main.go:141] libmachine: (multinode-675000) Calling .GetState
	I1212 12:18:20.350910    6629 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:18:20.350974    6629 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6575
	I1212 12:18:20.352005    6629 status.go:330] multinode-675000 host status = "Running" (err=<nil>)
	I1212 12:18:20.352026    6629 host.go:66] Checking if "multinode-675000" exists ...
	I1212 12:18:20.352262    6629 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:18:20.352282    6629 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:18:20.360378    6629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51458
	I1212 12:18:20.360741    6629 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:18:20.361101    6629 main.go:141] libmachine: Using API Version  1
	I1212 12:18:20.361111    6629 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:18:20.361310    6629 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:18:20.361425    6629 main.go:141] libmachine: (multinode-675000) Calling .GetIP
	I1212 12:18:20.361516    6629 host.go:66] Checking if "multinode-675000" exists ...
	I1212 12:18:20.361767    6629 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:18:20.361800    6629 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:18:20.370527    6629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51460
	I1212 12:18:20.370884    6629 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:18:20.371226    6629 main.go:141] libmachine: Using API Version  1
	I1212 12:18:20.371236    6629 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:18:20.371452    6629 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:18:20.371550    6629 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:18:20.371682    6629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 12:18:20.371702    6629 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:18:20.371785    6629 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:18:20.371873    6629 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:18:20.371942    6629 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:18:20.372069    6629 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:18:20.410829    6629 ssh_runner.go:195] Run: systemctl --version
	I1212 12:18:20.414808    6629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 12:18:20.423984    6629 kubeconfig.go:92] found "multinode-675000" server: "https://192.169.0.13:8443"
	I1212 12:18:20.424026    6629 api_server.go:166] Checking apiserver status ...
	I1212 12:18:20.424081    6629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 12:18:20.432839    6629 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1753/cgroup
	I1212 12:18:20.439518    6629 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/poda93d2462fc4179c4ac4fea222dfb096b/fb02933e38d84983047b7ffd44a869edbcdb966e0335749052e311e44efff800"
	I1212 12:18:20.439564    6629 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda93d2462fc4179c4ac4fea222dfb096b/fb02933e38d84983047b7ffd44a869edbcdb966e0335749052e311e44efff800/freezer.state
	I1212 12:18:20.446524    6629 api_server.go:204] freezer state: "THAWED"
	I1212 12:18:20.446558    6629 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 12:18:20.450700    6629 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I1212 12:18:20.450716    6629 status.go:421] multinode-675000 apiserver status = Running (err=<nil>)
	I1212 12:18:20.450724    6629 status.go:257] multinode-675000 status: &{Name:multinode-675000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
multinode_test.go:415: expected 2 nodes Ready status to be True, got 
-- stdout --
	' True
	'

                                                
                                                
-- /stdout --
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-675000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-675000 logs -n 25: (3.692493521s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:15 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:15 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- exec          | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | -- nslookup kubernetes.io            |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- exec          | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | -- nslookup kubernetes.default       |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000                  | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | -- exec  -- nslookup                 |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| node    | add -p multinode-675000 -v 3         | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | multinode-675000 node stop m03       | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	| node    | multinode-675000 node start          | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | m03 --alsologtostderr                |                  |         |         |                     |                     |
	| node    | list -p multinode-675000             | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	| stop    | -p multinode-675000                  | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST | 12 Dec 23 12:16 PST |
	| start   | -p multinode-675000                  | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:16 PST | 12 Dec 23 12:17 PST |
	|         | --wait=true -v=8                     |                  |         |         |                     |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | list -p multinode-675000             | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:17 PST |                     |
	| node    | multinode-675000 node delete         | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:17 PST |                     |
	|         | m03                                  |                  |         |         |                     |                     |
	| stop    | multinode-675000 stop                | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:17 PST | 12 Dec 23 12:17 PST |
	| start   | -p multinode-675000                  | multinode-675000 | jenkins | v1.32.0 | 12 Dec 23 12:17 PST | 12 Dec 23 12:18 PST |
	|         | --wait=true -v=8                     |                  |         |         |                     |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	|         | --driver=hyperkit                    |                  |         |         |                     |                     |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 12:17:29
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 12:17:29.333957    6560 out.go:296] Setting OutFile to fd 1 ...
	I1212 12:17:29.334185    6560 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:17:29.334192    6560 out.go:309] Setting ErrFile to fd 2...
	I1212 12:17:29.334196    6560 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:17:29.334404    6560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	I1212 12:17:29.335897    6560 out.go:303] Setting JSON to false
	I1212 12:17:29.358707    6560 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2820,"bootTime":1702409429,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 12:17:29.358824    6560 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 12:17:29.380436    6560 out.go:177] * [multinode-675000] minikube v1.32.0 on Darwin 14.2
	I1212 12:17:29.443969    6560 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 12:17:29.423195    6560 notify.go:220] Checking for updates...
	I1212 12:17:29.486019    6560 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:17:29.506981    6560 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 12:17:29.527777    6560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 12:17:29.548930    6560 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	I1212 12:17:29.569925    6560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 12:17:29.591436    6560 config.go:182] Loaded profile config "multinode-675000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:17:29.592084    6560 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:17:29.592158    6560 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:17:29.601625    6560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51413
	I1212 12:17:29.602014    6560 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:17:29.602456    6560 main.go:141] libmachine: Using API Version  1
	I1212 12:17:29.602465    6560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:17:29.602679    6560 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:17:29.602824    6560 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:17:29.603176    6560 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 12:17:29.603474    6560 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:17:29.603498    6560 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:17:29.611533    6560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51415
	I1212 12:17:29.611855    6560 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:17:29.612214    6560 main.go:141] libmachine: Using API Version  1
	I1212 12:17:29.612231    6560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:17:29.612493    6560 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:17:29.612617    6560 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:17:29.641876    6560 out.go:177] * Using the hyperkit driver based on existing profile
	I1212 12:17:29.683797    6560 start.go:298] selected driver: hyperkit
	I1212 12:17:29.683821    6560 start.go:902] validating driver "hyperkit" against &{Name:multinode-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-675000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 12:17:29.683996    6560 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 12:17:29.684204    6560 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 12:17:29.684345    6560 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17734-1975/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 12:17:29.693459    6560 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 12:17:29.697327    6560 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:17:29.697353    6560 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 12:17:29.700078    6560 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 12:17:29.700153    6560 cni.go:84] Creating CNI manager for ""
	I1212 12:17:29.700162    6560 cni.go:136] 1 nodes found, recommending kindnet
	I1212 12:17:29.700174    6560 start_flags.go:323] config:
	{Name:multinode-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-675000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 12:17:29.700341    6560 iso.go:125] acquiring lock: {Name:mkd640d41cda61c79a7d2c2e38355d745b556a2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 12:17:29.743028    6560 out.go:177] * Starting control plane node multinode-675000 in cluster multinode-675000
	I1212 12:17:29.763772    6560 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 12:17:29.763824    6560 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 12:17:29.763848    6560 cache.go:56] Caching tarball of preloaded images
	I1212 12:17:29.764000    6560 preload.go:174] Found /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 12:17:29.764027    6560 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 12:17:29.764167    6560 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/config.json ...
	I1212 12:17:29.765056    6560 start.go:365] acquiring machines lock for multinode-675000: {Name:mkcfb9a2794178bbcff953e64f7f6a3e3b1e9997 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 12:17:29.765133    6560 start.go:369] acquired machines lock for "multinode-675000" in 59.34µs
	I1212 12:17:29.765158    6560 start.go:96] Skipping create...Using existing machine configuration
	I1212 12:17:29.765168    6560 fix.go:54] fixHost starting: 
	I1212 12:17:29.765454    6560 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:17:29.765485    6560 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:17:29.773933    6560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51417
	I1212 12:17:29.774442    6560 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:17:29.775026    6560 main.go:141] libmachine: Using API Version  1
	I1212 12:17:29.775038    6560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:17:29.775406    6560 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:17:29.775621    6560 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:17:29.775728    6560 main.go:141] libmachine: (multinode-675000) Calling .GetState
	I1212 12:17:29.775814    6560 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:17:29.775879    6560 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6434
	I1212 12:17:29.776925    6560 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid 6434 missing from process table
	I1212 12:17:29.776970    6560 fix.go:102] recreateIfNeeded on multinode-675000: state=Stopped err=<nil>
	I1212 12:17:29.776990    6560 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	W1212 12:17:29.777074    6560 fix.go:128] unexpected machine state, will restart: <nil>
	I1212 12:17:29.819006    6560 out.go:177] * Restarting existing hyperkit VM for "multinode-675000" ...
	I1212 12:17:29.841710    6560 main.go:141] libmachine: (multinode-675000) Calling .Start
	I1212 12:17:29.841954    6560 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:17:29.841999    6560 main.go:141] libmachine: (multinode-675000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/hyperkit.pid
	I1212 12:17:29.843898    6560 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid 6434 missing from process table
	I1212 12:17:29.843914    6560 main.go:141] libmachine: (multinode-675000) DBG | pid 6434 is in state "Stopped"
	I1212 12:17:29.843930    6560 main.go:141] libmachine: (multinode-675000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/hyperkit.pid...
	I1212 12:17:29.844122    6560 main.go:141] libmachine: (multinode-675000) DBG | Using UUID fbe44634-992a-11ee-b1fb-f01898ef957c
	I1212 12:17:29.974752    6560 main.go:141] libmachine: (multinode-675000) DBG | Generated MAC 6:ed:17:4f:83:b2
	I1212 12:17:29.974784    6560 main.go:141] libmachine: (multinode-675000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-675000
	I1212 12:17:29.974946    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:29 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fbe44634-992a-11ee-b1fb-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003e9740)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I1212 12:17:29.974978    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:29 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fbe44634-992a-11ee-b1fb-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003e9740)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I1212 12:17:29.975070    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:29 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fbe44634-992a-11ee-b1fb-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/multinode-675000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/tty,log=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/bzimage,/Users/jenkins/minikube-integration/1773
4-1975/.minikube/machines/multinode-675000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-675000"}
	I1212 12:17:29.975119    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:29 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fbe44634-992a-11ee-b1fb-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/multinode-675000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/tty,log=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/console-ring -f kexec,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/bzimage,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/initrd,earlyprintk=
serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-675000"
	I1212 12:17:29.975142    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:29 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1212 12:17:29.976927    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:29 DEBUG: hyperkit: Pid is 6575
	I1212 12:17:29.977334    6560 main.go:141] libmachine: (multinode-675000) DBG | Attempt 0
	I1212 12:17:29.977365    6560 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:17:29.977485    6560 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6575
	I1212 12:17:29.979354    6560 main.go:141] libmachine: (multinode-675000) DBG | Searching for 6:ed:17:4f:83:b2 in /var/db/dhcpd_leases ...
	I1212 12:17:29.979426    6560 main.go:141] libmachine: (multinode-675000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I1212 12:17:29.979452    6560 main.go:141] libmachine: (multinode-675000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:6:ed:17:4f:83:b2 ID:1,6:ed:17:4f:83:b2 Lease:0x657a111b}
	I1212 12:17:29.979465    6560 main.go:141] libmachine: (multinode-675000) DBG | Found match: 6:ed:17:4f:83:b2
	I1212 12:17:29.979474    6560 main.go:141] libmachine: (multinode-675000) DBG | IP: 192.169.0.13
	I1212 12:17:29.979554    6560 main.go:141] libmachine: (multinode-675000) Calling .GetConfigRaw
	I1212 12:17:29.980207    6560 main.go:141] libmachine: (multinode-675000) Calling .GetIP
	I1212 12:17:29.980361    6560 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/config.json ...
	I1212 12:17:29.980749    6560 machine.go:88] provisioning docker machine ...
	I1212 12:17:29.980770    6560 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:17:29.980922    6560 main.go:141] libmachine: (multinode-675000) Calling .GetMachineName
	I1212 12:17:29.981043    6560 buildroot.go:166] provisioning hostname "multinode-675000"
	I1212 12:17:29.981061    6560 main.go:141] libmachine: (multinode-675000) Calling .GetMachineName
	I1212 12:17:29.981202    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:17:29.981360    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:17:29.981452    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:29.981533    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:29.981666    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:17:29.981862    6560 main.go:141] libmachine: Using SSH client type: native
	I1212 12:17:29.982198    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:17:29.982210    6560 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-675000 && echo "multinode-675000" | sudo tee /etc/hostname
	I1212 12:17:29.985919    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:29 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1212 12:17:30.046328    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:30 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1212 12:17:30.047211    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 12:17:30.047236    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 12:17:30.047250    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 12:17:30.047264    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 12:17:30.421703    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1212 12:17:30.421733    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1212 12:17:30.525832    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 12:17:30.525849    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 12:17:30.525862    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 12:17:30.525901    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 12:17:30.526779    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1212 12:17:30.526790    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1212 12:17:35.538018    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:35 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1212 12:17:35.538103    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:35 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1212 12:17:35.538140    6560 main.go:141] libmachine: (multinode-675000) DBG | 2023/12/12 12:17:35 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1212 12:17:41.061680    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-675000
	
	I1212 12:17:41.061697    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:17:41.061829    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:17:41.061926    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:41.062037    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:41.062117    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:17:41.062281    6560 main.go:141] libmachine: Using SSH client type: native
	I1212 12:17:41.062526    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:17:41.062538    6560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-675000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-675000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-675000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 12:17:41.134835    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 12:17:41.134853    6560 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17734-1975/.minikube CaCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17734-1975/.minikube}
	I1212 12:17:41.134865    6560 buildroot.go:174] setting up certificates
	I1212 12:17:41.134875    6560 provision.go:83] configureAuth start
	I1212 12:17:41.134902    6560 main.go:141] libmachine: (multinode-675000) Calling .GetMachineName
	I1212 12:17:41.135062    6560 main.go:141] libmachine: (multinode-675000) Calling .GetIP
	I1212 12:17:41.135162    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:17:41.135250    6560 provision.go:138] copyHostCerts
	I1212 12:17:41.135277    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem
	I1212 12:17:41.135322    6560 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem, removing ...
	I1212 12:17:41.135330    6560 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem
	I1212 12:17:41.135525    6560 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem (1078 bytes)
	I1212 12:17:41.135779    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem
	I1212 12:17:41.135808    6560 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem, removing ...
	I1212 12:17:41.135813    6560 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem
	I1212 12:17:41.135879    6560 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem (1123 bytes)
	I1212 12:17:41.136048    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem
	I1212 12:17:41.136079    6560 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem, removing ...
	I1212 12:17:41.136084    6560 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem
	I1212 12:17:41.136158    6560 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem (1675 bytes)
	I1212 12:17:41.136312    6560 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca-key.pem org=jenkins.multinode-675000 san=[192.169.0.13 192.169.0.13 localhost 127.0.0.1 minikube multinode-675000]
	I1212 12:17:41.215776    6560 provision.go:172] copyRemoteCerts
	I1212 12:17:41.215831    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 12:17:41.215851    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:17:41.215999    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:17:41.216113    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:41.216206    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:17:41.216316    6560 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:17:41.255621    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 12:17:41.255676    6560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 12:17:41.272031    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 12:17:41.272086    6560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 12:17:41.289290    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 12:17:41.289343    6560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 12:17:41.306127    6560 provision.go:86] duration metric: configureAuth took 171.242221ms
	I1212 12:17:41.306141    6560 buildroot.go:189] setting minikube options for container-runtime
	I1212 12:17:41.306263    6560 config.go:182] Loaded profile config "multinode-675000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:17:41.306279    6560 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:17:41.306411    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:17:41.306530    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:17:41.306619    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:41.306712    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:41.306806    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:17:41.306910    6560 main.go:141] libmachine: Using SSH client type: native
	I1212 12:17:41.307143    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:17:41.307152    6560 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 12:17:41.375463    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 12:17:41.375478    6560 buildroot.go:70] root file system type: tmpfs
	I1212 12:17:41.375607    6560 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 12:17:41.375623    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:17:41.375827    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:17:41.375954    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:41.376099    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:41.376215    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:17:41.376354    6560 main.go:141] libmachine: Using SSH client type: native
	I1212 12:17:41.376614    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:17:41.376701    6560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 12:17:41.453616    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 12:17:41.453642    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:17:41.453783    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:17:41.453883    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:41.453961    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:41.454056    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:17:41.454180    6560 main.go:141] libmachine: Using SSH client type: native
	I1212 12:17:41.454429    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:17:41.454444    6560 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 12:17:42.063340    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 12:17:42.063360    6560 machine.go:91] provisioned docker machine in 12.082779676s
	I1212 12:17:42.063369    6560 start.go:300] post-start starting for "multinode-675000" (driver="hyperkit")
	I1212 12:17:42.063378    6560 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 12:17:42.063394    6560 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:17:42.063574    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 12:17:42.063585    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:17:42.063673    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:17:42.063757    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:42.063862    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:17:42.063946    6560 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:17:42.103923    6560 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 12:17:42.106461    6560 command_runner.go:130] > NAME=Buildroot
	I1212 12:17:42.106471    6560 command_runner.go:130] > VERSION=2021.02.12-1-g0ec83c8-dirty
	I1212 12:17:42.106478    6560 command_runner.go:130] > ID=buildroot
	I1212 12:17:42.106482    6560 command_runner.go:130] > VERSION_ID=2021.02.12
	I1212 12:17:42.106487    6560 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1212 12:17:42.106596    6560 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 12:17:42.106611    6560 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17734-1975/.minikube/addons for local assets ...
	I1212 12:17:42.106685    6560 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17734-1975/.minikube/files for local assets ...
	I1212 12:17:42.106824    6560 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem -> 31982.pem in /etc/ssl/certs
	I1212 12:17:42.106830    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem -> /etc/ssl/certs/31982.pem
	I1212 12:17:42.106988    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 12:17:42.113153    6560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem --> /etc/ssl/certs/31982.pem (1708 bytes)
	I1212 12:17:42.129207    6560 start.go:303] post-start completed in 65.829566ms
	I1212 12:17:42.129223    6560 fix.go:56] fixHost completed within 12.364240333s
	I1212 12:17:42.129236    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:17:42.129381    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:17:42.129491    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:42.129577    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:42.129682    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:17:42.129815    6560 main.go:141] libmachine: Using SSH client type: native
	I1212 12:17:42.130145    6560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I1212 12:17:42.130153    6560 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 12:17:42.196934    6560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702412262.266006367
	
	I1212 12:17:42.196945    6560 fix.go:206] guest clock: 1702412262.266006367
	I1212 12:17:42.196950    6560 fix.go:219] Guest: 2023-12-12 12:17:42.266006367 -0800 PST Remote: 2023-12-12 12:17:42.129225 -0800 PST m=+12.840391540 (delta=136.781367ms)
	I1212 12:17:42.196973    6560 fix.go:190] guest clock delta is within tolerance: 136.781367ms
	I1212 12:17:42.196977    6560 start.go:83] releasing machines lock for "multinode-675000", held for 12.432018567s
	I1212 12:17:42.197000    6560 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:17:42.197132    6560 main.go:141] libmachine: (multinode-675000) Calling .GetIP
	I1212 12:17:42.197229    6560 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:17:42.197519    6560 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:17:42.197611    6560 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:17:42.197685    6560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 12:17:42.197722    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:17:42.197761    6560 ssh_runner.go:195] Run: cat /version.json
	I1212 12:17:42.197781    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:17:42.197835    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:17:42.197875    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:17:42.197943    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:42.197987    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:17:42.198038    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:17:42.198082    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:17:42.198116    6560 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:17:42.198159    6560 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:17:42.234530    6560 command_runner.go:130] > {"iso_version": "v1.32.1-1701996673-17738", "kicbase_version": "v0.0.42-1701974066-17719", "minikube_version": "v1.32.0", "commit": "2518fadffa02a308edcd7fa670f350a21819c5e4"}
	I1212 12:17:42.234764    6560 ssh_runner.go:195] Run: systemctl --version
	I1212 12:17:42.288761    6560 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 12:17:42.289408    6560 command_runner.go:130] > systemd 247 (247)
	I1212 12:17:42.289422    6560 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1212 12:17:42.289480    6560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 12:17:42.293008    6560 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 12:17:42.293148    6560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 12:17:42.293205    6560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 12:17:42.303642    6560 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1212 12:17:42.303668    6560 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
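
The `find ... -exec mv {} {}.mk_disabled` step above sidelines any bridge or podman CNI configs so they cannot conflict with the CNI minikube installs later; here only 87-podman-bridge.conflist was renamed. A rough Go equivalent of that rename pass (directory and suffix taken from the log; error handling simplified):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs in dir by appending
// suffix, mirroring the find/mv command shown in the log above.
func disableConflictingCNI(dir, suffix string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, suffix) {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+suffix); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d", ".mk_disabled")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("disabled bridge cni config(s):", disabled)
}
```
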
	I1212 12:17:42.303678    6560 start.go:475] detecting cgroup driver to use...
	I1212 12:17:42.303791    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 12:17:42.316670    6560 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1212 12:17:42.316967    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 12:17:42.323345    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 12:17:42.329972    6560 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 12:17:42.330069    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 12:17:42.337116    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 12:17:42.344218    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 12:17:42.351072    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 12:17:42.357927    6560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 12:17:42.364682    6560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 12:17:42.371384    6560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 12:17:42.377368    6560 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 12:17:42.377454    6560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 12:17:42.383754    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:17:42.465781    6560 ssh_runner.go:195] Run: sudo systemctl restart containerd
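
The run of sed edits above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), the runc v2 runtime, the pause:3.9 sandbox image, and /etc/cni/net.d as its CNI conf dir, then restarts the daemon. A small sketch of the SystemdCgroup rewrite alone, done with a Go regexp instead of sed (the sample TOML below is illustrative, not the VM's real config):

```go
package main

import (
	"fmt"
	"regexp"
)

// setCgroupfs flips SystemdCgroup to false in a containerd config, preserving
// indentation, much like the sed expression in the log above.
func setCgroupfs(config string) string {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
}

func main() {
	sample := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	fmt.Print(setCgroupfs(sample))
}
```
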
	I1212 12:17:42.478237    6560 start.go:475] detecting cgroup driver to use...
	I1212 12:17:42.478307    6560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 12:17:42.490408    6560 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1212 12:17:42.490875    6560 command_runner.go:130] > [Unit]
	I1212 12:17:42.490903    6560 command_runner.go:130] > Description=Docker Application Container Engine
	I1212 12:17:42.490908    6560 command_runner.go:130] > Documentation=https://docs.docker.com
	I1212 12:17:42.490913    6560 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1212 12:17:42.490918    6560 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1212 12:17:42.490936    6560 command_runner.go:130] > StartLimitBurst=3
	I1212 12:17:42.490943    6560 command_runner.go:130] > StartLimitIntervalSec=60
	I1212 12:17:42.490947    6560 command_runner.go:130] > [Service]
	I1212 12:17:42.490952    6560 command_runner.go:130] > Type=notify
	I1212 12:17:42.490958    6560 command_runner.go:130] > Restart=on-failure
	I1212 12:17:42.490965    6560 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1212 12:17:42.490985    6560 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1212 12:17:42.490991    6560 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1212 12:17:42.490997    6560 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1212 12:17:42.491003    6560 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1212 12:17:42.491009    6560 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1212 12:17:42.491015    6560 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1212 12:17:42.491031    6560 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1212 12:17:42.491051    6560 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1212 12:17:42.491076    6560 command_runner.go:130] > ExecStart=
	I1212 12:17:42.491090    6560 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I1212 12:17:42.491096    6560 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1212 12:17:42.491103    6560 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1212 12:17:42.491109    6560 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1212 12:17:42.491116    6560 command_runner.go:130] > LimitNOFILE=infinity
	I1212 12:17:42.491121    6560 command_runner.go:130] > LimitNPROC=infinity
	I1212 12:17:42.491125    6560 command_runner.go:130] > LimitCORE=infinity
	I1212 12:17:42.491130    6560 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1212 12:17:42.491136    6560 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1212 12:17:42.491146    6560 command_runner.go:130] > TasksMax=infinity
	I1212 12:17:42.491151    6560 command_runner.go:130] > TimeoutStartSec=0
	I1212 12:17:42.491156    6560 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1212 12:17:42.491160    6560 command_runner.go:130] > Delegate=yes
	I1212 12:17:42.491166    6560 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1212 12:17:42.491172    6560 command_runner.go:130] > KillMode=process
	I1212 12:17:42.491176    6560 command_runner.go:130] > [Install]
	I1212 12:17:42.491186    6560 command_runner.go:130] > WantedBy=multi-user.target
	I1212 12:17:42.491561    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 12:17:42.505858    6560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 12:17:42.521121    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 12:17:42.530123    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 12:17:42.539005    6560 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 12:17:42.576214    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 12:17:42.585561    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 12:17:42.597683    6560 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1212 12:17:42.598053    6560 ssh_runner.go:195] Run: which cri-dockerd
	I1212 12:17:42.600319    6560 command_runner.go:130] > /usr/bin/cri-dockerd
	I1212 12:17:42.600496    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 12:17:42.606405    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 12:17:42.617527    6560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 12:17:42.715517    6560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 12:17:42.809840    6560 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 12:17:42.809932    6560 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 12:17:42.822264    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:17:42.913340    6560 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 12:17:44.210140    6560 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.2967948s)
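
After unmasking docker, the log shows a 130-byte /etc/docker/daemon.json being pushed so dockerd also reports the cgroupfs driver, followed by a daemon-reload and restart. The file contents are not logged; a plausible sketch of generating such a file (the exact option set is an assumption, though exec-opts/native.cgroupdriver is the standard dockerd setting for pinning the cgroup driver):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig models a minimal /etc/docker/daemon.json that pins the cgroup
// driver; the field choice is illustrative, not the file minikube actually wrote.
type daemonConfig struct {
	ExecOpts  []string          `json:"exec-opts"`
	LogDriver string            `json:"log-driver"`
	LogOpts   map[string]string `json:"log-opts"`
}

func main() {
	cfg := daemonConfig{
		ExecOpts:  []string{"native.cgroupdriver=cgroupfs"},
		LogDriver: "json-file",
		LogOpts:   map[string]string{"max-size": "100m"},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```
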
	I1212 12:17:44.210197    6560 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 12:17:44.292851    6560 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 12:17:44.376553    6560 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 12:17:44.468122    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:17:44.561104    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 12:17:44.573419    6560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:17:44.669106    6560 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 12:17:44.722542    6560 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 12:17:44.722617    6560 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 12:17:44.726326    6560 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1212 12:17:44.726341    6560 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 12:17:44.726348    6560 command_runner.go:130] > Device: 16h/22d	Inode: 875         Links: 1
	I1212 12:17:44.726356    6560 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1212 12:17:44.726364    6560 command_runner.go:130] > Access: 2023-12-12 20:17:44.749909574 +0000
	I1212 12:17:44.726370    6560 command_runner.go:130] > Modify: 2023-12-12 20:17:44.749909574 +0000
	I1212 12:17:44.726374    6560 command_runner.go:130] > Change: 2023-12-12 20:17:44.751909574 +0000
	I1212 12:17:44.726378    6560 command_runner.go:130] >  Birth: -
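
start.go then waits up to 60s for /var/run/cri-dockerd.sock to exist before trusting the runtime; the stat output above confirms the socket appeared almost immediately. A simple polling sketch of that wait (the interval and the use of os.Stat are assumptions, not minikube's exact implementation):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the timeout elapses.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is ready")
}
```
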
	I1212 12:17:44.726456    6560 start.go:543] Will wait 60s for crictl version
	I1212 12:17:44.726526    6560 ssh_runner.go:195] Run: which crictl
	I1212 12:17:44.732475    6560 command_runner.go:130] > /usr/bin/crictl
	I1212 12:17:44.732673    6560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 12:17:44.766469    6560 command_runner.go:130] > Version:  0.1.0
	I1212 12:17:44.766534    6560 command_runner.go:130] > RuntimeName:  docker
	I1212 12:17:44.766591    6560 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1212 12:17:44.766810    6560 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 12:17:44.767929    6560 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1212 12:17:44.767999    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 12:17:44.785319    6560 command_runner.go:130] > 24.0.7
	I1212 12:17:44.786252    6560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 12:17:44.802757    6560 command_runner.go:130] > 24.0.7
	I1212 12:17:44.849672    6560 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 12:17:44.849732    6560 main.go:141] libmachine: (multinode-675000) Calling .GetIP
	I1212 12:17:44.850137    6560 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1212 12:17:44.854426    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 12:17:44.863145    6560 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 12:17:44.863200    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 12:17:44.875313    6560 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 12:17:44.875327    6560 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 12:17:44.875332    6560 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 12:17:44.875336    6560 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 12:17:44.875339    6560 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1212 12:17:44.875344    6560 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 12:17:44.875348    6560 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 12:17:44.875352    6560 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 12:17:44.875357    6560 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 12:17:44.875908    6560 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 12:17:44.875924    6560 docker.go:601] Images already preloaded, skipping extraction
	I1212 12:17:44.876001    6560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 12:17:44.889297    6560 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1212 12:17:44.889311    6560 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1212 12:17:44.889315    6560 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1212 12:17:44.889320    6560 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1212 12:17:44.889324    6560 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1212 12:17:44.889328    6560 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1212 12:17:44.889337    6560 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1212 12:17:44.889342    6560 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1212 12:17:44.889347    6560 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 12:17:44.889936    6560 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 12:17:44.889954    6560 cache_images.go:84] Images are preloaded, skipping loading
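
The two `docker images --format {{.Repository}}:{{.Tag}}` listings above are compared against the images the preload tarball should provide; since everything needed for v1.28.4 is already present, extraction is skipped. A sketch of that required-vs-present comparison (the required list is copied from the log; how minikube actually derives it is not shown here):

```go
package main

import "fmt"

// missingImages returns the entries of required that do not appear in present.
func missingImages(required, present []string) []string {
	have := make(map[string]bool, len(present))
	for _, img := range present {
		have[img] = true
	}
	var missing []string
	for _, img := range required {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.28.4",
		"registry.k8s.io/kube-controller-manager:v1.28.4",
		"registry.k8s.io/kube-scheduler:v1.28.4",
		"registry.k8s.io/kube-proxy:v1.28.4",
		"registry.k8s.io/etcd:3.5.9-0",
		"registry.k8s.io/coredns/coredns:v1.10.1",
		"registry.k8s.io/pause:3.9",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	present := append([]string{"kindest/kindnetd:v20230809-80a64d96"}, required...)

	if m := missingImages(required, present); len(m) == 0 {
		fmt.Println("Images are preloaded, skipping loading")
	} else {
		fmt.Println("need to load:", m)
	}
}
```
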
	I1212 12:17:44.890013    6560 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 12:17:44.908484    6560 command_runner.go:130] > cgroupfs
	I1212 12:17:44.908980    6560 cni.go:84] Creating CNI manager for ""
	I1212 12:17:44.908989    6560 cni.go:136] 1 nodes found, recommending kindnet
	I1212 12:17:44.909002    6560 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 12:17:44.909019    6560 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-675000 NodeName:multinode-675000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 12:17:44.909106    6560 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-675000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 12:17:44.909157    6560 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-675000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-675000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1212 12:17:44.909212    6560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 12:17:44.914936    6560 command_runner.go:130] > kubeadm
	I1212 12:17:44.914944    6560 command_runner.go:130] > kubectl
	I1212 12:17:44.914947    6560 command_runner.go:130] > kubelet
	I1212 12:17:44.915162    6560 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 12:17:44.915201    6560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 12:17:44.920919    6560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1212 12:17:44.932229    6560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 12:17:44.943753    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
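
The kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is rendered from the kubeadm options struct and then copied to /var/tmp/minikube/kubeadm.yaml.new. A toy text/template rendering of just the InitConfiguration stanza, using the node IP and name from the log (the template text is a reconstruction for illustration, not minikube's actual template):

```go
package main

import (
	"os"
	"text/template"
)

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

type initOpts struct {
	NodeIP        string
	APIServerPort int
	CRISocket     string
	NodeName      string
}

func main() {
	opts := initOpts{
		NodeIP:        "192.169.0.13",
		APIServerPort: 8443,
		CRISocket:     "/var/run/cri-dockerd.sock",
		NodeName:      "multinode-675000",
	}
	tmpl := template.Must(template.New("init").Parse(initConfigTmpl))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
```
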
	I1212 12:17:44.955124    6560 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I1212 12:17:44.957457    6560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
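
Both host.minikube.internal and control-plane.minikube.internal are pinned in the guest's /etc/hosts by first grepping for an existing entry and, if needed, filtering the stale line out and appending a fresh one, as the bash one-liners above do. A pure-Go sketch of the same replace-or-append logic on an in-memory hosts file:

```go
package main

import (
	"fmt"
	"strings"
)

// pinHost removes any existing line for hostname and appends "ip\thostname",
// mirroring the grep/echo pipeline shown in the log above.
func pinHost(hosts, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop stale entry
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.169.0.1\thost.minikube.internal\n"
	hosts = pinHost(hosts, "192.169.0.13", "control-plane.minikube.internal")
	fmt.Print(hosts)
}
```
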
	I1212 12:17:44.965660    6560 certs.go:56] Setting up /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000 for IP: 192.169.0.13
	I1212 12:17:44.965677    6560 certs.go:190] acquiring lock for shared ca certs: {Name:mk3a28fc3e7d169ec96b49a3f31bfa6edcaf7ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:17:44.965814    6560 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.key
	I1212 12:17:44.965862    6560 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17734-1975/.minikube/proxy-client-ca.key
	I1212 12:17:44.965950    6560 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.key
	I1212 12:17:44.966005    6560 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.key.ff8d457b
	I1212 12:17:44.966068    6560 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/proxy-client.key
	I1212 12:17:44.966076    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 12:17:44.966096    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 12:17:44.966114    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 12:17:44.966130    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 12:17:44.966146    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 12:17:44.966162    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 12:17:44.966181    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 12:17:44.966196    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 12:17:44.966278    6560 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/3198.pem (1338 bytes)
	W1212 12:17:44.966314    6560 certs.go:433] ignoring /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/3198_empty.pem, impossibly tiny 0 bytes
	I1212 12:17:44.966323    6560 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 12:17:44.966356    6560 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem (1078 bytes)
	I1212 12:17:44.966388    6560 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem (1123 bytes)
	I1212 12:17:44.966419    6560 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem (1675 bytes)
	I1212 12:17:44.966481    6560 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem (1708 bytes)
	I1212 12:17:44.966512    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/3198.pem -> /usr/share/ca-certificates/3198.pem
	I1212 12:17:44.966531    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem -> /usr/share/ca-certificates/31982.pem
	I1212 12:17:44.966547    6560 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 12:17:44.966987    6560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 12:17:44.984132    6560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 12:17:45.000821    6560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 12:17:45.017239    6560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 12:17:45.034208    6560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 12:17:45.050739    6560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 12:17:45.067020    6560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 12:17:45.085059    6560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 12:17:45.102464    6560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/3198.pem --> /usr/share/ca-certificates/3198.pem (1338 bytes)
	I1212 12:17:45.118531    6560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem --> /usr/share/ca-certificates/31982.pem (1708 bytes)
	I1212 12:17:45.135348    6560 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 12:17:45.151878    6560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 12:17:45.163363    6560 ssh_runner.go:195] Run: openssl version
	I1212 12:17:45.166685    6560 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1212 12:17:45.166942    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 12:17:45.173477    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 12:17:45.176319    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 19:58 /usr/share/ca-certificates/minikubeCA.pem
	I1212 12:17:45.176430    6560 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:58 /usr/share/ca-certificates/minikubeCA.pem
	I1212 12:17:45.176472    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 12:17:45.180104    6560 command_runner.go:130] > b5213941
	I1212 12:17:45.180271    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 12:17:45.187288    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3198.pem && ln -fs /usr/share/ca-certificates/3198.pem /etc/ssl/certs/3198.pem"
	I1212 12:17:45.194800    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3198.pem
	I1212 12:17:45.197803    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 20:03 /usr/share/ca-certificates/3198.pem
	I1212 12:17:45.197902    6560 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:03 /usr/share/ca-certificates/3198.pem
	I1212 12:17:45.197937    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3198.pem
	I1212 12:17:45.201559    6560 command_runner.go:130] > 51391683
	I1212 12:17:45.201829    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3198.pem /etc/ssl/certs/51391683.0"
	I1212 12:17:45.208284    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31982.pem && ln -fs /usr/share/ca-certificates/31982.pem /etc/ssl/certs/31982.pem"
	I1212 12:17:45.214710    6560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31982.pem
	I1212 12:17:45.217536    6560 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 20:03 /usr/share/ca-certificates/31982.pem
	I1212 12:17:45.217694    6560 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:03 /usr/share/ca-certificates/31982.pem
	I1212 12:17:45.217730    6560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31982.pem
	I1212 12:17:45.221111    6560 command_runner.go:130] > 3ec20f2e
	I1212 12:17:45.221377    6560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31982.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 12:17:45.228026    6560 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 12:17:45.230956    6560 command_runner.go:130] > ca.crt
	I1212 12:17:45.230966    6560 command_runner.go:130] > ca.key
	I1212 12:17:45.230970    6560 command_runner.go:130] > healthcheck-client.crt
	I1212 12:17:45.230974    6560 command_runner.go:130] > healthcheck-client.key
	I1212 12:17:45.230978    6560 command_runner.go:130] > peer.crt
	I1212 12:17:45.230982    6560 command_runner.go:130] > peer.key
	I1212 12:17:45.230986    6560 command_runner.go:130] > server.crt
	I1212 12:17:45.230995    6560 command_runner.go:130] > server.key
	I1212 12:17:45.231213    6560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 12:17:45.235444    6560 command_runner.go:130] > Certificate will not expire
	I1212 12:17:45.235621    6560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 12:17:45.239434    6560 command_runner.go:130] > Certificate will not expire
	I1212 12:17:45.239595    6560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 12:17:45.243366    6560 command_runner.go:130] > Certificate will not expire
	I1212 12:17:45.243517    6560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 12:17:45.247125    6560 command_runner.go:130] > Certificate will not expire
	I1212 12:17:45.247255    6560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 12:17:45.250645    6560 command_runner.go:130] > Certificate will not expire
	I1212 12:17:45.250852    6560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 12:17:45.254312    6560 command_runner.go:130] > Certificate will not expire
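
Each client and serving certificate is probed with `openssl x509 -noout -checkend 86400`, which exits non-zero if the certificate expires within the next 24 hours; all of them pass here, so nothing is regenerated. A pure-Go equivalent of that probe using crypto/x509 (reading a PEM file passed on the command line; the openssl shell-out is what the log actually shows):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemBytes expires
// before now+window, the same condition `openssl x509 -checkend` tests.
func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: checkend <cert.pem>")
		return
	}
	data, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/etcd/server.crt
	if err != nil {
		panic(err)
	}
	expiring, err := expiresWithin(data, 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if expiring {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```
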
	I1212 12:17:45.254560    6560 kubeadm.go:404] StartCluster: {Name:multinode-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-675000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 12:17:45.254648    6560 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 12:17:45.266903    6560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 12:17:45.272711    6560 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 12:17:45.272720    6560 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 12:17:45.272724    6560 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 12:17:45.272727    6560 command_runner.go:130] > member
	I1212 12:17:45.272822    6560 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1212 12:17:45.272835    6560 kubeadm.go:636] restartCluster start
	I1212 12:17:45.272871    6560 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 12:17:45.278690    6560 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:45.278985    6560 kubeconfig.go:135] verify returned: extract IP: "multinode-675000" does not appear in /Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:17:45.279051    6560 kubeconfig.go:146] "multinode-675000" context is missing from /Users/jenkins/minikube-integration/17734-1975/kubeconfig - will repair!
	I1212 12:17:45.279239    6560 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/kubeconfig: {Name:mk6d5ef4e0f8c6a055bbd7ff4a33097a831e2d15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:17:45.280136    6560 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:17:45.280321    6560 kapi.go:59] client config for multinode-675000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.key", CAFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 12:17:45.280742    6560 cert_rotation.go:137] Starting client certificate rotation controller
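
The rest.Config dump above shows the client being built straight from the profile's client.crt/client.key and the cluster CA, pointed at https://192.169.0.13:8443. A minimal sketch of constructing an equivalent mutually-authenticated HTTP client with the standard library (paths shortened; this is not the client-go code minikube actually uses):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
	"time"
)

// newAPIClient builds an HTTPS client that presents a client certificate and
// trusts only the cluster CA, roughly what the rest.Config above describes.
func newAPIClient(certFile, keyFile, caFile string) (*http.Client, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("no CA certs parsed from %s", caFile)
	}
	return &http.Client{
		Timeout: 10 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
		},
	}, nil
}

func main() {
	client, err := newAPIClient("client.crt", "client.key", "ca.crt")
	if err != nil {
		panic(err)
	}
	_ = client // use with e.g. client.Get("https://192.169.0.13:8443/version")
	fmt.Println("client configured")
}
```
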
	I1212 12:17:45.280906    6560 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 12:17:45.286934    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:45.286976    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:45.294993    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:45.295004    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:45.295043    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:45.302442    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:45.802555    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:45.802647    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:45.811026    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:46.303688    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:46.303789    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:46.313403    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:46.802790    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:46.802888    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:46.812638    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:47.304567    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:47.304734    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:47.314387    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:47.803678    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:47.803809    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:47.812855    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:48.304216    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:48.304367    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:48.313802    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:48.804524    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:48.804671    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:48.814316    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:49.302823    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:49.302976    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:49.312452    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:49.804551    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:49.804734    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:49.814411    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:50.304523    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:50.304673    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:50.314330    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:50.803933    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:50.804070    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:50.813879    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:51.302487    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:51.302597    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:51.312127    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:51.804489    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:51.804653    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:51.814176    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:52.302699    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:52.302845    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:52.312402    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:52.804467    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:52.804626    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:52.814065    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:53.304449    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:53.304629    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:53.314801    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:53.802450    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:53.802583    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:53.812931    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:54.304128    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:54.304262    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:54.314914    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:54.802743    6560 api_server.go:166] Checking apiserver status ...
	I1212 12:17:54.802903    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 12:17:54.813399    6560 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 12:17:55.287259    6560 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
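
The block of repeated pgrep probes above is a deadline-bounded wait: minikube keeps asking the guest for a kube-apiserver process until one appears or the surrounding context times out, at which point it falls back to reconfiguring the cluster (the "context deadline exceeded" decision logged here). A sketch of that pattern with context and exec (running pgrep locally for illustration; minikube runs it over SSH, and the interval/timeout values are assumptions):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` until it reports a PID or ctx expires.
func waitForProcess(ctx context.Context, pattern string, interval time.Duration) (string, error) {
	for {
		out, err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Output()
		if err == nil {
			return string(out), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver error: %w", ctx.Err())
		case <-time.After(interval):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	pid, err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond)
	if err != nil {
		fmt.Println("needs reconfigure:", err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}
```
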
	I1212 12:17:55.287344    6560 kubeadm.go:1135] stopping kube-system containers ...
	I1212 12:17:55.287442    6560 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 12:17:55.302526    6560 command_runner.go:130] > 5139a190a0a7
	I1212 12:17:55.302538    6560 command_runner.go:130] > 0b9a6a315bae
	I1212 12:17:55.302542    6560 command_runner.go:130] > 6800e8084788
	I1212 12:17:55.302545    6560 command_runner.go:130] > 906956fbad37
	I1212 12:17:55.302549    6560 command_runner.go:130] > a391a1302e24
	I1212 12:17:55.302552    6560 command_runner.go:130] > 5c4ec41a543b
	I1212 12:17:55.302555    6560 command_runner.go:130] > c4d605b91fef
	I1212 12:17:55.302559    6560 command_runner.go:130] > c6f5291d5248
	I1212 12:17:55.302562    6560 command_runner.go:130] > ec1ccfe051cf
	I1212 12:17:55.302566    6560 command_runner.go:130] > 0dfb53ca1162
	I1212 12:17:55.302570    6560 command_runner.go:130] > 2e3863acd67e
	I1212 12:17:55.302579    6560 command_runner.go:130] > 6a5980fcc6dc
	I1212 12:17:55.302583    6560 command_runner.go:130] > ec16ed874303
	I1212 12:17:55.302586    6560 command_runner.go:130] > 5365eadc60c2
	I1212 12:17:55.302589    6560 command_runner.go:130] > 32f46c3efb2c
	I1212 12:17:55.302593    6560 command_runner.go:130] > 759eb904c17a
	I1212 12:17:55.303231    6560 docker.go:469] Stopping containers: [5139a190a0a7 0b9a6a315bae 6800e8084788 906956fbad37 a391a1302e24 5c4ec41a543b c4d605b91fef c6f5291d5248 ec1ccfe051cf 0dfb53ca1162 2e3863acd67e 6a5980fcc6dc ec16ed874303 5365eadc60c2 32f46c3efb2c 759eb904c17a]
	I1212 12:17:55.303361    6560 ssh_runner.go:195] Run: docker stop 5139a190a0a7 0b9a6a315bae 6800e8084788 906956fbad37 a391a1302e24 5c4ec41a543b c4d605b91fef c6f5291d5248 ec1ccfe051cf 0dfb53ca1162 2e3863acd67e 6a5980fcc6dc ec16ed874303 5365eadc60c2 32f46c3efb2c 759eb904c17a
	I1212 12:17:55.318028    6560 command_runner.go:130] > 5139a190a0a7
	I1212 12:17:55.318040    6560 command_runner.go:130] > 0b9a6a315bae
	I1212 12:17:55.318043    6560 command_runner.go:130] > 6800e8084788
	I1212 12:17:55.318047    6560 command_runner.go:130] > 906956fbad37
	I1212 12:17:55.318051    6560 command_runner.go:130] > a391a1302e24
	I1212 12:17:55.318054    6560 command_runner.go:130] > 5c4ec41a543b
	I1212 12:17:55.318058    6560 command_runner.go:130] > c4d605b91fef
	I1212 12:17:55.318061    6560 command_runner.go:130] > c6f5291d5248
	I1212 12:17:55.318065    6560 command_runner.go:130] > ec1ccfe051cf
	I1212 12:17:55.318069    6560 command_runner.go:130] > 0dfb53ca1162
	I1212 12:17:55.318072    6560 command_runner.go:130] > 2e3863acd67e
	I1212 12:17:55.318076    6560 command_runner.go:130] > 6a5980fcc6dc
	I1212 12:17:55.318079    6560 command_runner.go:130] > ec16ed874303
	I1212 12:17:55.318217    6560 command_runner.go:130] > 5365eadc60c2
	I1212 12:17:55.318224    6560 command_runner.go:130] > 32f46c3efb2c
	I1212 12:17:55.318232    6560 command_runner.go:130] > 759eb904c17a
	I1212 12:17:55.318909    6560 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 12:17:55.330233    6560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 12:17:55.337093    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1212 12:17:55.337105    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1212 12:17:55.337111    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1212 12:17:55.337116    6560 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 12:17:55.337159    6560 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 12:17:55.337201    6560 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 12:17:55.343901    6560 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1212 12:17:55.343912    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 12:17:55.412984    6560 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 12:17:55.413336    6560 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1212 12:17:55.413775    6560 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1212 12:17:55.414148    6560 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 12:17:55.414666    6560 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1212 12:17:55.415098    6560 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1212 12:17:55.415593    6560 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1212 12:17:55.415950    6560 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1212 12:17:55.416294    6560 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1212 12:17:55.416698    6560 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 12:17:55.417054    6560 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 12:17:55.417470    6560 command_runner.go:130] > [certs] Using the existing "sa" key
	I1212 12:17:55.418430    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 12:17:55.452456    6560 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 12:17:55.551967    6560 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 12:17:55.878702    6560 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 12:17:56.289204    6560 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 12:17:56.336470    6560 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 12:17:56.338594    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 12:17:56.383734    6560 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 12:17:56.385118    6560 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 12:17:56.385181    6560 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1212 12:17:56.481284    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 12:17:56.530269    6560 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 12:17:56.530283    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 12:17:56.532670    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 12:17:56.532778    6560 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 12:17:56.537820    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 12:17:56.587575    6560 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 12:17:56.592823    6560 api_server.go:52] waiting for apiserver process to appear ...
	I1212 12:17:56.592874    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 12:17:56.601821    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 12:17:57.111997    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 12:17:57.612553    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 12:17:58.112078    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 12:17:58.611851    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 12:17:58.623519    6560 command_runner.go:130] > 1753
	I1212 12:17:58.623752    6560 api_server.go:72] duration metric: took 2.03095901s to wait for apiserver process to appear ...
	I1212 12:17:58.623764    6560 api_server.go:88] waiting for apiserver healthz status ...
	I1212 12:17:58.623779    6560 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 12:18:01.282638    6560 api_server.go:279] https://192.169.0.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 12:18:01.282656    6560 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 12:18:01.282667    6560 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 12:18:01.338492    6560 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 12:18:01.338512    6560 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 12:18:01.839536    6560 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 12:18:01.844435    6560 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 12:18:01.844448    6560 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 12:18:02.338603    6560 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 12:18:02.343399    6560 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1212 12:18:02.343413    6560 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1212 12:18:02.839658    6560 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 12:18:02.843083    6560 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
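	api_server.go keeps polling the apiserver's /healthz endpoint until it returns 200; the 403 (anonymous access) and 500 responses above are expected while post-start hooks such as rbac/bootstrap-roles are still completing. A rough sketch of such a polling loop in plain net/http, assuming anonymous HTTPS access and skipping certificate verification for brevity (the real client authenticates against the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the given /healthz URL until it returns HTTP 200 or the
// deadline passes, mirroring the retry behaviour visible in the log.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			// Sketch only: do not skip TLS verification in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.169.0.13:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```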
	I1212 12:18:02.843142    6560 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I1212 12:18:02.843149    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:02.843157    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:02.843163    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:02.848500    6560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 12:18:02.848512    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:02.848517    6560 round_trippers.go:580]     Audit-Id: a114c695-15f8-4704-b914-6b21717ea243
	I1212 12:18:02.848530    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:02.848535    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:02.848540    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:02.848545    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:02.848550    6560 round_trippers.go:580]     Content-Length: 264
	I1212 12:18:02.848555    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:02 GMT
	I1212 12:18:02.848574    6560 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 12:18:02.848626    6560 api_server.go:141] control plane version: v1.28.4
	I1212 12:18:02.848637    6560 api_server.go:131] duration metric: took 4.224930819s to wait for apiserver health ...
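	Once /healthz passes, the client fetches /version and records the control-plane version (v1.28.4 here). A small sketch of decoding that response body into a local struct; the field names are taken from the JSON shown above, and a real client would typically use the typed version.Info from k8s.io/apimachinery instead:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors a subset of the /version response fields from the log.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	body := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.4","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(body, &v); err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s (%s)\n", v.GitVersion, v.Platform)
}
```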
	I1212 12:18:02.848642    6560 cni.go:84] Creating CNI manager for ""
	I1212 12:18:02.848646    6560 cni.go:136] 1 nodes found, recommending kindnet
	I1212 12:18:02.872739    6560 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 12:18:02.892646    6560 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 12:18:02.895930    6560 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1212 12:18:02.895944    6560 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1212 12:18:02.895949    6560 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1212 12:18:02.895954    6560 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 12:18:02.895959    6560 command_runner.go:130] > Access: 2023-12-12 20:17:38.635909590 +0000
	I1212 12:18:02.895964    6560 command_runner.go:130] > Modify: 2023-12-08 06:25:18.000000000 +0000
	I1212 12:18:02.895968    6560 command_runner.go:130] > Change: 2023-12-12 20:17:36.831137047 +0000
	I1212 12:18:02.895972    6560 command_runner.go:130] >  Birth: -
	I1212 12:18:02.896068    6560 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1212 12:18:02.896077    6560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1212 12:18:02.907856    6560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 12:18:03.807792    6560 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1212 12:18:03.810044    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1212 12:18:03.811607    6560 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1212 12:18:03.821945    6560 command_runner.go:130] > daemonset.apps/kindnet configured
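	cni.go copies the kindnet manifest into the VM and applies it with the bundled kubectl, producing the "unchanged"/"configured" lines above. A hypothetical local equivalent of that apply step via os/exec, assuming kubectl and a kubeconfig exist at the paths shown (the real code runs this over SSH inside the guest):

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Apply the CNI manifest with an explicit kubeconfig, as in the log above.
	cmd := exec.Command("kubectl",
		"--kubeconfig", "/var/lib/minikube/kubeconfig",
		"apply", "-f", "/var/tmp/minikube/cni.yaml")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```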
	I1212 12:18:03.823839    6560 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 12:18:03.823897    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 12:18:03.823902    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:03.823908    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:03.823914    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:03.827332    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 12:18:03.827347    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:03.827353    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:03 GMT
	I1212 12:18:03.827363    6560 round_trippers.go:580]     Audit-Id: c21f3ad7-8cb8-4c32-9c9c-f33fd47f6124
	I1212 12:18:03.827369    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:03.827373    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:03.827378    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:03.827383    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:03.828670    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"528"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57122 chars]
	I1212 12:18:03.831090    6560 system_pods.go:59] 8 kube-system pods found
	I1212 12:18:03.831111    6560 system_pods.go:61] "coredns-5dd5756b68-2qgqq" [6bc47af7-f871-4daa-97ca-23500d80fc1b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 12:18:03.831117    6560 system_pods.go:61] "etcd-multinode-675000" [bca57b7b-a960-4492-8f79-e6f8aa87f070] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 12:18:03.831122    6560 system_pods.go:61] "kindnet-4vq6m" [c528f3f9-a180-497c-892d-0305174740c9] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1212 12:18:03.831129    6560 system_pods.go:61] "kube-apiserver-multinode-675000" [8c377a02-06d4-44e2-a275-5a72e7917a90] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 12:18:03.831134    6560 system_pods.go:61] "kube-controller-manager-multinode-675000" [d99bab41-1594-4f91-b6cf-63f143cbd1fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 12:18:03.831138    6560 system_pods.go:61] "kube-proxy-q4dfx" [2a62b5cc-b780-4ef5-8663-4a01ca0e2932] Running
	I1212 12:18:03.831144    6560 system_pods.go:61] "kube-scheduler-multinode-675000" [a51d1149-64de-4c6e-a8ae-d04d45097278] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 12:18:03.831149    6560 system_pods.go:61] "storage-provisioner" [6f39d754-bc48-49e5-a0e4-fda2cbf521b7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 12:18:03.831157    6560 system_pods.go:74] duration metric: took 7.310535ms to wait for pod list to return data ...
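	system_pods.go lists the kube-system pods via the raw REST client shown above and reports their readiness. A condensed client-go sketch of the same listing, assuming a kubeconfig at an illustrative path (not minikube's actual helper code):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for illustration.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("  %q %s\n", p.Name, p.Status.Phase)
	}
}
```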
	I1212 12:18:03.831164    6560 node_conditions.go:102] verifying NodePressure condition ...
	I1212 12:18:03.831202    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I1212 12:18:03.831206    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:03.831212    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:03.831219    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:03.833239    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:03.833250    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:03.833256    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:03.833272    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:03.833280    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:03 GMT
	I1212 12:18:03.833286    6560 round_trippers.go:580]     Audit-Id: 04414e50-4ae5-43bc-8fae-ba42c6cbcce9
	I1212 12:18:03.833291    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:03.833307    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:03.833362    6560 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"528"},"items":[{"metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5183 chars]
	I1212 12:18:03.833706    6560 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 12:18:03.833721    6560 node_conditions.go:123] node cpu capacity is 2
	I1212 12:18:03.833733    6560 node_conditions.go:105] duration metric: took 2.564642ms to run NodePressure ...
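	node_conditions.go reads each node's capacity (ephemeral storage, CPU) and verifies there is no memory or disk pressure before continuing. A helper-style sketch of that check with client-go types, meant to be wired to a clientset built as in the previous example:

```go
package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// verifyNodePressure lists all nodes, prints their capacity, and fails if any
// node reports memory or disk pressure — roughly the NodePressure check above.
func verifyNodePressure(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s reports %s", n.Name, c.Type)
			}
		}
	}
	return nil
}
```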
	I1212 12:18:03.833743    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 12:18:03.933026    6560 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1212 12:18:03.965902    6560 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1212 12:18:03.966932    6560 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1212 12:18:03.966989    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1212 12:18:03.966994    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:03.967000    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:03.967006    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:03.969050    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:03.969060    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:03.969065    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:03.969089    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:04 GMT
	I1212 12:18:03.969098    6560 round_trippers.go:580]     Audit-Id: c9bfc1ea-0cfd-40a1-bd7f-69f6632e3df3
	I1212 12:18:03.969103    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:03.969108    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:03.969114    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:03.969690    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"530"},"items":[{"metadata":{"name":"etcd-multinode-675000","namespace":"kube-system","uid":"bca57b7b-a960-4492-8f79-e6f8aa87f070","resourceVersion":"462","creationTimestamp":"2023-12-12T20:16:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"b8a6875b46c6a0a1242452e56d9fe808","kubernetes.io/config.mirror":"b8a6875b46c6a0a1242452e56d9fe808","kubernetes.io/config.seen":"2023-12-12T20:16:44.273254977Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 29734 chars]
	I1212 12:18:03.970391    6560 kubeadm.go:787] kubelet initialised
	I1212 12:18:03.970400    6560 kubeadm.go:788] duration metric: took 3.458778ms waiting for restarted kubelet to initialise ...
	I1212 12:18:03.970407    6560 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 12:18:03.970435    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 12:18:03.970440    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:03.970446    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:03.970451    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:03.972593    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:03.972614    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:03.972627    6560 round_trippers.go:580]     Audit-Id: f5170016-c99e-412a-9f9c-35cb3b40f90d
	I1212 12:18:03.972642    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:03.972652    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:03.972661    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:03.972670    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:03.972678    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:04 GMT
	I1212 12:18:03.973163    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"530"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57122 chars]
	I1212 12:18:03.974592    6560 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-2qgqq" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:03.974639    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:03.974645    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:03.974651    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:03.974656    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:03.976391    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:03.976407    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:03.976413    6560 round_trippers.go:580]     Audit-Id: 89695d5d-d6dc-42c0-b62a-86a984b9a356
	I1212 12:18:03.976419    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:03.976426    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:03.976432    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:03.976436    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:03.976443    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:04 GMT
	I1212 12:18:03.976515    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:03.976801    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:03.976808    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:03.976814    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:03.976819    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:03.978490    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:03.978502    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:03.978510    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:03.978518    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:04 GMT
	I1212 12:18:03.978525    6560 round_trippers.go:580]     Audit-Id: 6855ff74-573e-463f-b7bf-dffef738d274
	I1212 12:18:03.978535    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:03.978546    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:03.978554    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:03.978852    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:03.979050    6560 pod_ready.go:97] node "multinode-675000" hosting pod "coredns-5dd5756b68-2qgqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-675000" has status "Ready":"False"
	I1212 12:18:03.979060    6560 pod_ready.go:81] duration metric: took 4.458396ms waiting for pod "coredns-5dd5756b68-2qgqq" in "kube-system" namespace to be "Ready" ...
	E1212 12:18:03.979067    6560 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-675000" hosting pod "coredns-5dd5756b68-2qgqq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-675000" has status "Ready":"False"
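	Each per-pod wait above is short-circuited because the hosting node still reports Ready:"False"; once the node becomes Ready, the wait falls back to checking the pod's own Ready condition. A helper-style sketch of that two-step check using client-go types (assumes a clientset as in the earlier examples; not minikube's pod_ready.go itself):

```go
package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podOnReadyNode reports whether both the hosting node and the pod itself have
// a Ready condition of True, mirroring the checks visible in the log above.
func podOnReadyNode(ctx context.Context, cs kubernetes.Interface, namespace, pod string) (bool, error) {
	p, err := cs.CoreV1().Pods(namespace).Get(ctx, pod, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	nodeReady := false
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
			nodeReady = true
		}
	}
	if !nodeReady {
		// The log above hits this branch: the node is NotReady, so the
		// per-pod wait is skipped rather than treated as a failure.
		return false, nil
	}
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true, nil
		}
	}
	return false, nil
}
```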
	I1212 12:18:03.979078    6560 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:03.979116    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-675000
	I1212 12:18:03.979120    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:03.979126    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:03.979133    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:03.980825    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:03.980837    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:03.980843    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:03.980850    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:03.980859    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:03.980867    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:04 GMT
	I1212 12:18:03.980873    6560 round_trippers.go:580]     Audit-Id: 0754ec1b-f041-4a77-b213-c3517ceec8aa
	I1212 12:18:03.980877    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:03.980966    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-675000","namespace":"kube-system","uid":"bca57b7b-a960-4492-8f79-e6f8aa87f070","resourceVersion":"462","creationTimestamp":"2023-12-12T20:16:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"b8a6875b46c6a0a1242452e56d9fe808","kubernetes.io/config.mirror":"b8a6875b46c6a0a1242452e56d9fe808","kubernetes.io/config.seen":"2023-12-12T20:16:44.273254977Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6285 chars]
	I1212 12:18:03.981221    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:03.981228    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:03.981234    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:03.981240    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:03.982673    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:03.982683    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:03.982688    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:03.982694    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:04 GMT
	I1212 12:18:03.982703    6560 round_trippers.go:580]     Audit-Id: 2931d57b-dfad-4f5c-87b8-9d55f17a0885
	I1212 12:18:03.982711    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:03.982716    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:03.982721    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:03.982874    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:03.983079    6560 pod_ready.go:97] node "multinode-675000" hosting pod "etcd-multinode-675000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-675000" has status "Ready":"False"
	I1212 12:18:03.983092    6560 pod_ready.go:81] duration metric: took 4.009007ms waiting for pod "etcd-multinode-675000" in "kube-system" namespace to be "Ready" ...
	E1212 12:18:03.983099    6560 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-675000" hosting pod "etcd-multinode-675000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-675000" has status "Ready":"False"
	I1212 12:18:03.983108    6560 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:03.983139    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-675000
	I1212 12:18:03.983144    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:03.983150    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:03.983156    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:03.984531    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:03.984541    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:03.984548    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:03.984555    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:03.984562    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:03.984568    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:03.984574    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:04 GMT
	I1212 12:18:03.984579    6560 round_trippers.go:580]     Audit-Id: 88a9e69d-3e2f-4bee-954b-62bc87dcbcf6
	I1212 12:18:03.984792    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-675000","namespace":"kube-system","uid":"8c377a02-06d4-44e2-a275-5a72e7917a90","resourceVersion":"463","creationTimestamp":"2023-12-12T20:16:51Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"a93d2462fc4179c4ac4fea222dfb096b","kubernetes.io/config.mirror":"a93d2462fc4179c4ac4fea222dfb096b","kubernetes.io/config.seen":"2023-12-12T20:16:51.301865289Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7841 chars]
	I1212 12:18:03.985051    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:03.985058    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:03.985064    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:03.985069    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:03.986568    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:03.986576    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:03.986581    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:03.986585    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:03.986591    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:03.986595    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:03.986600    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:04 GMT
	I1212 12:18:03.986604    6560 round_trippers.go:580]     Audit-Id: a6ac9097-9070-4cdc-959c-90e073ae7992
	I1212 12:18:03.986816    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:03.986991    6560 pod_ready.go:97] node "multinode-675000" hosting pod "kube-apiserver-multinode-675000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-675000" has status "Ready":"False"
	I1212 12:18:03.987001    6560 pod_ready.go:81] duration metric: took 3.887404ms waiting for pod "kube-apiserver-multinode-675000" in "kube-system" namespace to be "Ready" ...
	E1212 12:18:03.987007    6560 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-675000" hosting pod "kube-apiserver-multinode-675000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-675000" has status "Ready":"False"
	I1212 12:18:03.987017    6560 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:04.024791    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-675000
	I1212 12:18:04.024803    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:04.024810    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:04.024815    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:04.026511    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:04.026522    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:04.026527    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:04.026533    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:04.026538    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:04.026545    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:04.026555    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:04 GMT
	I1212 12:18:04.026570    6560 round_trippers.go:580]     Audit-Id: 91c27c55-2d7c-493c-be96-5a17dcc42377
	I1212 12:18:04.026698    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-675000","namespace":"kube-system","uid":"d99bab41-1594-4f91-b6cf-63f143cbd1fb","resourceVersion":"464","creationTimestamp":"2023-12-12T20:16:51Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e889149f3645071732e65c53e76071e","kubernetes.io/config.mirror":"8e889149f3645071732e65c53e76071e","kubernetes.io/config.seen":"2023-12-12T20:16:51.301865920Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7432 chars]
	I1212 12:18:04.224033    6560 request.go:629] Waited for 197.031502ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:04.224095    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:04.224109    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:04.224160    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:04.224172    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:04.226744    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:04.226760    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:04.226769    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:04.226776    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:04.226782    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:04 GMT
	I1212 12:18:04.226788    6560 round_trippers.go:580]     Audit-Id: d18f3ee7-b354-4001-8540-ea7a3bfc9fc3
	I1212 12:18:04.226794    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:04.226802    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:04.226895    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:04.227166    6560 pod_ready.go:97] node "multinode-675000" hosting pod "kube-controller-manager-multinode-675000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-675000" has status "Ready":"False"
	I1212 12:18:04.227183    6560 pod_ready.go:81] duration metric: took 240.159347ms waiting for pod "kube-controller-manager-multinode-675000" in "kube-system" namespace to be "Ready" ...
	E1212 12:18:04.227193    6560 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-675000" hosting pod "kube-controller-manager-multinode-675000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-675000" has status "Ready":"False"
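	The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's built-in request rate limiter: with the defaults left unset (roughly QPS 5 / burst 10, if memory serves), the burst of node and pod GETs above gets queued for ~200ms at a time. A sketch of loosening those limits on the rest.Config before building the clientset — the values and kubeconfig path are illustrative, not what minikube uses:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Loosen the client-side rate limiter so bursts of GETs like the ones in
	// the log are not queued (illustrative values).
	config.QPS = 50
	config.Burst = 100

	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("listed %d nodes without client-side throttling delays\n", len(nodes.Items))
}
```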
	I1212 12:18:04.227201    6560 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q4dfx" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:04.424175    6560 request.go:629] Waited for 196.91659ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q4dfx
	I1212 12:18:04.424251    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q4dfx
	I1212 12:18:04.424262    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:04.424276    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:04.424286    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:04.427152    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:04.427166    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:04.427174    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:04 GMT
	I1212 12:18:04.427180    6560 round_trippers.go:580]     Audit-Id: 82860f3c-baf8-4377-b386-fc31e1dbb783
	I1212 12:18:04.427187    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:04.427193    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:04.427200    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:04.427206    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:04.427296    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q4dfx","generateName":"kube-proxy-","namespace":"kube-system","uid":"2a62b5cc-b780-4ef5-8663-4a01ca0e2932","resourceVersion":"474","creationTimestamp":"2023-12-12T20:17:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5e692c0d-042c-458d-9e34-28feed1938bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e692c0d-042c-458d-9e34-28feed1938bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
	I1212 12:18:04.625186    6560 request.go:629] Waited for 197.535644ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:04.625234    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:04.625242    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:04.625253    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:04.625263    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:04.628212    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:04.628227    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:04.628235    6560 round_trippers.go:580]     Audit-Id: bf873928-adc1-410b-a857-c095cd5876ea
	I1212 12:18:04.628241    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:04.628250    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:04.628260    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:04.628269    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:04.628301    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:04 GMT
	I1212 12:18:04.628571    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:04.628784    6560 pod_ready.go:97] node "multinode-675000" hosting pod "kube-proxy-q4dfx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-675000" has status "Ready":"False"
	I1212 12:18:04.628797    6560 pod_ready.go:81] duration metric: took 401.583289ms waiting for pod "kube-proxy-q4dfx" in "kube-system" namespace to be "Ready" ...
	E1212 12:18:04.628804    6560 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-675000" hosting pod "kube-proxy-q4dfx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-675000" has status "Ready":"False"
	I1212 12:18:04.628816    6560 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:04.824075    6560 request.go:629] Waited for 195.183615ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-675000
	I1212 12:18:04.824121    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-675000
	I1212 12:18:04.824127    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:04.824134    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:04.824140    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:04.825915    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:04.825926    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:04.825931    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:04.825938    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:04.825945    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:04 GMT
	I1212 12:18:04.825956    6560 round_trippers.go:580]     Audit-Id: bc37c94c-89ea-45be-bebf-e7d38c35bef1
	I1212 12:18:04.825969    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:04.825979    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:04.826062    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-675000","namespace":"kube-system","uid":"a51d1149-64de-4c6e-a8ae-d04d45097278","resourceVersion":"469","creationTimestamp":"2023-12-12T20:16:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"94c171efd7c72f0a76d945c5e6e993d1","kubernetes.io/config.mirror":"94c171efd7c72f0a76d945c5e6e993d1","kubernetes.io/config.seen":"2023-12-12T20:16:51.301860165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5144 chars]
	I1212 12:18:05.024154    6560 request.go:629] Waited for 197.77355ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:05.024210    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:05.024220    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:05.024232    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:05.024242    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:05.027653    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 12:18:05.027667    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:05.027675    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:05.027681    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:05.027687    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:05 GMT
	I1212 12:18:05.027693    6560 round_trippers.go:580]     Audit-Id: a687c04a-e53c-4a6e-8d71-93a176b90c57
	I1212 12:18:05.027699    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:05.027705    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:05.027823    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:05.028082    6560 pod_ready.go:97] node "multinode-675000" hosting pod "kube-scheduler-multinode-675000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-675000" has status "Ready":"False"
	I1212 12:18:05.028099    6560 pod_ready.go:81] duration metric: took 399.283526ms waiting for pod "kube-scheduler-multinode-675000" in "kube-system" namespace to be "Ready" ...
	E1212 12:18:05.028107    6560 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-675000" hosting pod "kube-scheduler-multinode-675000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-675000" has status "Ready":"False"
	I1212 12:18:05.028114    6560 pod_ready.go:38] duration metric: took 1.057716117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 12:18:05.028126    6560 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 12:18:05.036669    6560 command_runner.go:130] > -16
	I1212 12:18:05.036738    6560 ops.go:34] apiserver oom_adj: -16
	I1212 12:18:05.036774    6560 kubeadm.go:640] restartCluster took 19.764219808s
	I1212 12:18:05.036780    6560 kubeadm.go:406] StartCluster complete in 19.782515789s
	I1212 12:18:05.036792    6560 settings.go:142] acquiring lock: {Name:mk437dff6ee4f62ea2311e5ad7dccf890596936f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:18:05.036883    6560 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:18:05.037402    6560 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/kubeconfig: {Name:mk6d5ef4e0f8c6a055bbd7ff4a33097a831e2d15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:18:05.037708    6560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 12:18:05.037745    6560 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 12:18:05.037801    6560 addons.go:69] Setting storage-provisioner=true in profile "multinode-675000"
	I1212 12:18:05.037805    6560 addons.go:69] Setting default-storageclass=true in profile "multinode-675000"
	I1212 12:18:05.037820    6560 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-675000"
	I1212 12:18:05.037831    6560 addons.go:231] Setting addon storage-provisioner=true in "multinode-675000"
	W1212 12:18:05.037840    6560 addons.go:240] addon storage-provisioner should already be in state true
	I1212 12:18:05.037877    6560 config.go:182] Loaded profile config "multinode-675000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:18:05.037887    6560 host.go:66] Checking if "multinode-675000" exists ...
	I1212 12:18:05.038075    6560 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:18:05.038095    6560 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:18:05.038145    6560 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:18:05.038168    6560 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:18:05.038191    6560 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:18:05.039150    6560 kapi.go:59] client config for multinode-675000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.key", CAFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 12:18:05.042089    6560 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1212 12:18:05.042165    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:05.042176    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:05.042184    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:05.044626    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:05.044642    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:05.044648    6560 round_trippers.go:580]     Content-Length: 291
	I1212 12:18:05.044653    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:05 GMT
	I1212 12:18:05.044659    6560 round_trippers.go:580]     Audit-Id: 27c18b0a-3ab3-4a47-a131-a8b926644adf
	I1212 12:18:05.044665    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:05.044671    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:05.044675    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:05.044682    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:05.044701    6560 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1a3fb229-343b-479d-911a-188712e3cca3","resourceVersion":"529","creationTimestamp":"2023-12-12T20:16:51Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1212 12:18:05.044854    6560 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-675000" context rescaled to 1 replicas
	I1212 12:18:05.044880    6560 start.go:223] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 12:18:05.086616    6560 out.go:177] * Verifying Kubernetes components...
	I1212 12:18:05.048125    6560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51442
	I1212 12:18:05.049102    6560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51443
	I1212 12:18:05.107512    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 12:18:05.087131    6560 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:18:05.107924    6560 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:18:05.108054    6560 main.go:141] libmachine: Using API Version  1
	I1212 12:18:05.108068    6560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:18:05.108259    6560 main.go:141] libmachine: Using API Version  1
	I1212 12:18:05.108273    6560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:18:05.108282    6560 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:18:05.108424    6560 main.go:141] libmachine: (multinode-675000) Calling .GetState
	I1212 12:18:05.108498    6560 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:18:05.108529    6560 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:18:05.108628    6560 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6575
	I1212 12:18:05.108851    6560 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:18:05.108875    6560 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:18:05.110968    6560 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:18:05.111209    6560 kapi.go:59] client config for multinode-675000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000/client.key", CAFile:"/Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 12:18:05.111436    6560 addons.go:231] Setting addon default-storageclass=true in "multinode-675000"
	W1212 12:18:05.111445    6560 addons.go:240] addon default-storageclass should already be in state true
	I1212 12:18:05.111460    6560 host.go:66] Checking if "multinode-675000" exists ...
	I1212 12:18:05.111717    6560 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:18:05.111742    6560 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:18:05.116936    6560 command_runner.go:130] > apiVersion: v1
	I1212 12:18:05.116961    6560 command_runner.go:130] > data:
	I1212 12:18:05.116964    6560 command_runner.go:130] >   Corefile: |
	I1212 12:18:05.116967    6560 command_runner.go:130] >     .:53 {
	I1212 12:18:05.116971    6560 command_runner.go:130] >         log
	I1212 12:18:05.116977    6560 command_runner.go:130] >         errors
	I1212 12:18:05.116981    6560 command_runner.go:130] >         health {
	I1212 12:18:05.116986    6560 command_runner.go:130] >            lameduck 5s
	I1212 12:18:05.116991    6560 command_runner.go:130] >         }
	I1212 12:18:05.116995    6560 command_runner.go:130] >         ready
	I1212 12:18:05.117001    6560 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1212 12:18:05.117007    6560 command_runner.go:130] >            pods insecure
	I1212 12:18:05.117020    6560 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1212 12:18:05.117025    6560 command_runner.go:130] >            ttl 30
	I1212 12:18:05.117028    6560 command_runner.go:130] >         }
	I1212 12:18:05.117032    6560 command_runner.go:130] >         prometheus :9153
	I1212 12:18:05.117035    6560 command_runner.go:130] >         hosts {
	I1212 12:18:05.117040    6560 command_runner.go:130] >            192.169.0.1 host.minikube.internal
	I1212 12:18:05.117044    6560 command_runner.go:130] >            fallthrough
	I1212 12:18:05.117047    6560 command_runner.go:130] >         }
	I1212 12:18:05.117051    6560 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1212 12:18:05.117056    6560 command_runner.go:130] >            max_concurrent 1000
	I1212 12:18:05.117059    6560 command_runner.go:130] >         }
	I1212 12:18:05.117063    6560 command_runner.go:130] >         cache 30
	I1212 12:18:05.117072    6560 command_runner.go:130] >         loop
	I1212 12:18:05.117078    6560 command_runner.go:130] >         reload
	I1212 12:18:05.117082    6560 command_runner.go:130] >         loadbalance
	I1212 12:18:05.117085    6560 command_runner.go:130] >     }
	I1212 12:18:05.117089    6560 command_runner.go:130] > kind: ConfigMap
	I1212 12:18:05.117098    6560 command_runner.go:130] > metadata:
	I1212 12:18:05.117104    6560 command_runner.go:130] >   creationTimestamp: "2023-12-12T20:16:51Z"
	I1212 12:18:05.117107    6560 command_runner.go:130] >   name: coredns
	I1212 12:18:05.117110    6560 command_runner.go:130] >   namespace: kube-system
	I1212 12:18:05.117114    6560 command_runner.go:130] >   resourceVersion: "401"
	I1212 12:18:05.117118    6560 command_runner.go:130] >   uid: 0a612017-7a35-4efe-a969-615a6e8509a6
	I1212 12:18:05.117201    6560 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1212 12:18:05.117249    6560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51446
	I1212 12:18:05.117585    6560 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:18:05.117927    6560 main.go:141] libmachine: Using API Version  1
	I1212 12:18:05.117940    6560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:18:05.118175    6560 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:18:05.118301    6560 main.go:141] libmachine: (multinode-675000) Calling .GetState
	I1212 12:18:05.118398    6560 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:18:05.118467    6560 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6575
	I1212 12:18:05.119555    6560 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:18:05.140594    6560 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 12:18:05.120055    6560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51448
	I1212 12:18:05.121553    6560 node_ready.go:35] waiting up to 6m0s for node "multinode-675000" to be "Ready" ...
	I1212 12:18:05.161639    6560 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 12:18:05.161650    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 12:18:05.161663    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:18:05.161848    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:18:05.161959    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:18:05.162042    6560 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:18:05.162066    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:18:05.162164    6560 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:18:05.162405    6560 main.go:141] libmachine: Using API Version  1
	I1212 12:18:05.162414    6560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:18:05.162684    6560 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:18:05.163047    6560 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:18:05.163088    6560 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:18:05.171203    6560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51451
	I1212 12:18:05.171523    6560 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:18:05.171894    6560 main.go:141] libmachine: Using API Version  1
	I1212 12:18:05.171910    6560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:18:05.172345    6560 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:18:05.172500    6560 main.go:141] libmachine: (multinode-675000) Calling .GetState
	I1212 12:18:05.172588    6560 main.go:141] libmachine: (multinode-675000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:18:05.172683    6560 main.go:141] libmachine: (multinode-675000) DBG | hyperkit pid from json: 6575
	I1212 12:18:05.173805    6560 main.go:141] libmachine: (multinode-675000) Calling .DriverName
	I1212 12:18:05.174001    6560 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 12:18:05.174010    6560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 12:18:05.174019    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHHostname
	I1212 12:18:05.174137    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHPort
	I1212 12:18:05.174230    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHKeyPath
	I1212 12:18:05.174338    6560 main.go:141] libmachine: (multinode-675000) Calling .GetSSHUsername
	I1212 12:18:05.174430    6560 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000/id_rsa Username:docker}
	I1212 12:18:05.208456    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 12:18:05.224322    6560 request.go:629] Waited for 62.703782ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:05.224362    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:05.224367    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:05.224374    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:05.224380    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:05.226343    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:05.226362    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:05.226369    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:05.226373    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:05.226378    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:05.226383    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:05 GMT
	I1212 12:18:05.226396    6560 round_trippers.go:580]     Audit-Id: 9a31e352-4ac2-4c95-9b26-69fefa3f0319
	I1212 12:18:05.226402    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:05.226503    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:05.254081    6560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 12:18:05.424620    6560 request.go:629] Waited for 197.765954ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:05.424673    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:05.424678    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:05.424685    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:05.424705    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:05.426612    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:05.426624    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:05.426630    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:05.426634    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:05.426639    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:05.426644    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:05.426649    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:05 GMT
	I1212 12:18:05.426653    6560 round_trippers.go:580]     Audit-Id: 795cbf60-59e6-4394-8f9b-52342c8abb63
	I1212 12:18:05.426728    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:05.642104    6560 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I1212 12:18:05.644587    6560 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I1212 12:18:05.647667    6560 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1212 12:18:05.650029    6560 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1212 12:18:05.651758    6560 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I1212 12:18:05.660988    6560 command_runner.go:130] > pod/storage-provisioner configured
	I1212 12:18:05.663706    6560 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I1212 12:18:05.663752    6560 main.go:141] libmachine: Making call to close driver server
	I1212 12:18:05.663763    6560 main.go:141] libmachine: (multinode-675000) Calling .Close
	I1212 12:18:05.663860    6560 main.go:141] libmachine: Making call to close driver server
	I1212 12:18:05.663871    6560 main.go:141] libmachine: (multinode-675000) Calling .Close
	I1212 12:18:05.663923    6560 main.go:141] libmachine: Successfully made call to close driver server
	I1212 12:18:05.663924    6560 main.go:141] libmachine: (multinode-675000) DBG | Closing plugin on server side
	I1212 12:18:05.663940    6560 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 12:18:05.663954    6560 main.go:141] libmachine: Making call to close driver server
	I1212 12:18:05.663963    6560 main.go:141] libmachine: (multinode-675000) Calling .Close
	I1212 12:18:05.664039    6560 main.go:141] libmachine: Successfully made call to close driver server
	I1212 12:18:05.664051    6560 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 12:18:05.664061    6560 main.go:141] libmachine: Making call to close driver server
	I1212 12:18:05.664066    6560 main.go:141] libmachine: (multinode-675000) DBG | Closing plugin on server side
	I1212 12:18:05.664071    6560 main.go:141] libmachine: (multinode-675000) Calling .Close
	I1212 12:18:05.664118    6560 main.go:141] libmachine: Successfully made call to close driver server
	I1212 12:18:05.664142    6560 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 12:18:05.664161    6560 main.go:141] libmachine: (multinode-675000) DBG | Closing plugin on server side
	I1212 12:18:05.664222    6560 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I1212 12:18:05.664230    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:05.664240    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:05.664247    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:05.664260    6560 main.go:141] libmachine: Successfully made call to close driver server
	I1212 12:18:05.664270    6560 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 12:18:05.664281    6560 main.go:141] libmachine: (multinode-675000) DBG | Closing plugin on server side
	I1212 12:18:05.665968    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:05.665978    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:05.665983    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:05.665988    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:05.665993    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:05.665999    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:05.666005    6560 round_trippers.go:580]     Content-Length: 1273
	I1212 12:18:05.666009    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:05 GMT
	I1212 12:18:05.666014    6560 round_trippers.go:580]     Audit-Id: 656e944d-4269-42ce-a5ee-1535e89b9509
	I1212 12:18:05.666038    6560 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"535"},"items":[{"metadata":{"name":"standard","uid":"8c461a28-e249-492c-9549-9d63fb276924","resourceVersion":"402","creationTimestamp":"2023-12-12T20:17:05Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T20:17:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1212 12:18:05.666385    6560 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"8c461a28-e249-492c-9549-9d63fb276924","resourceVersion":"402","creationTimestamp":"2023-12-12T20:17:05Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T20:17:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 12:18:05.666415    6560 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1212 12:18:05.666419    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:05.666425    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:05.666433    6560 round_trippers.go:473]     Content-Type: application/json
	I1212 12:18:05.666438    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:05.668601    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:05.668611    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:05.668616    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:05.668621    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:05.668626    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:05.668630    6560 round_trippers.go:580]     Content-Length: 1220
	I1212 12:18:05.668635    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:05 GMT
	I1212 12:18:05.668640    6560 round_trippers.go:580]     Audit-Id: f515fe0b-0f04-4459-b052-f9706c27aeeb
	I1212 12:18:05.668647    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:05.668677    6560 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"8c461a28-e249-492c-9549-9d63fb276924","resourceVersion":"402","creationTimestamp":"2023-12-12T20:17:05Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-12T20:17:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1212 12:18:05.668751    6560 main.go:141] libmachine: Making call to close driver server
	I1212 12:18:05.668760    6560 main.go:141] libmachine: (multinode-675000) Calling .Close
	I1212 12:18:05.668905    6560 main.go:141] libmachine: (multinode-675000) DBG | Closing plugin on server side
	I1212 12:18:05.668912    6560 main.go:141] libmachine: Successfully made call to close driver server
	I1212 12:18:05.668920    6560 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 12:18:05.690851    6560 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 12:18:05.732272    6560 addons.go:502] enable addons completed in 694.5464ms: enabled=[storage-provisioner default-storageclass]
	I1212 12:18:05.927022    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:05.927035    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:05.927042    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:05.927047    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:05.928851    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:05.928863    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:05.928869    6560 round_trippers.go:580]     Audit-Id: cd652539-8ba2-4402-9099-1f01fc8493aa
	I1212 12:18:05.928873    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:05.928894    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:05.928900    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:05.928904    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:05.928909    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:06 GMT
	I1212 12:18:05.929005    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:06.427394    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:06.427410    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:06.427416    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:06.427421    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:06.429490    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:06.429505    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:06.429511    6560 round_trippers.go:580]     Audit-Id: 53e07d91-e337-41b0-95f7-1d273d46c7f8
	I1212 12:18:06.429517    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:06.429524    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:06.429530    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:06.429536    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:06.429547    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:06 GMT
	I1212 12:18:06.429662    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:06.927544    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:06.927572    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:06.927585    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:06.927595    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:06.930585    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:06.930612    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:06.930621    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:06.930652    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:06.930693    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:06.930704    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:06.930718    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:07 GMT
	I1212 12:18:06.930724    6560 round_trippers.go:580]     Audit-Id: aa2f79d5-e697-4ea3-8816-ff17a8f1cd4f
	I1212 12:18:06.930843    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:07.427015    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:07.427032    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:07.427059    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:07.427065    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:07.428836    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:07.428851    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:07.428863    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:07.428871    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:07 GMT
	I1212 12:18:07.428879    6560 round_trippers.go:580]     Audit-Id: 312cf95d-7fa3-4a28-9b1e-c96179ff07e2
	I1212 12:18:07.428886    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:07.428893    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:07.428900    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:07.429030    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:07.429259    6560 node_ready.go:58] node "multinode-675000" has status "Ready":"False"
	I1212 12:18:07.927199    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:07.927219    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:07.927232    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:07.927242    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:07.930195    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:07.930211    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:07.930220    6560 round_trippers.go:580]     Audit-Id: 2f5dfd30-6cef-4a8c-b17c-cefe1fd0b9cc
	I1212 12:18:07.930229    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:07.930236    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:07.930243    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:07.930264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:07.930271    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:08 GMT
	I1212 12:18:07.930432    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:08.427047    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:08.427070    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:08.427082    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:08.427092    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:08.429717    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:08.429731    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:08.429738    6560 round_trippers.go:580]     Audit-Id: 9beb6005-63f4-4881-9ed2-8f10af68df53
	I1212 12:18:08.429773    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:08.429787    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:08.429794    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:08.429800    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:08.429807    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:08 GMT
	I1212 12:18:08.429938    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:08.927139    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:08.927153    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:08.927162    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:08.927169    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:08.928869    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:08.928880    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:08.928886    6560 round_trippers.go:580]     Audit-Id: cc90156e-0319-410c-8037-05a93d6269a6
	I1212 12:18:08.928891    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:08.928896    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:08.928913    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:08.928918    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:08.928922    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:09 GMT
	I1212 12:18:08.929140    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:09.427123    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:09.427149    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:09.427167    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:09.427177    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:09.429812    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:09.429829    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:09.429840    6560 round_trippers.go:580]     Audit-Id: 3e81486a-4fa2-4ba5-bff5-d18ca34e0afe
	I1212 12:18:09.429849    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:09.429859    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:09.429870    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:09.429879    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:09.429890    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:09 GMT
	I1212 12:18:09.430210    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"461","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5130 chars]
	I1212 12:18:09.430456    6560 node_ready.go:58] node "multinode-675000" has status "Ready":"False"
	I1212 12:18:09.927068    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:09.927093    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:09.927105    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:09.927115    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:09.929911    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:09.929925    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:09.929932    6560 round_trippers.go:580]     Audit-Id: c2d21126-615b-4913-893d-e582ca371683
	I1212 12:18:09.929939    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:09.929945    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:09.929952    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:09.929958    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:09.929965    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:10 GMT
	I1212 12:18:09.930123    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:09.930386    6560 node_ready.go:49] node "multinode-675000" has status "Ready":"True"
	I1212 12:18:09.930403    6560 node_ready.go:38] duration metric: took 4.768914755s waiting for node "multinode-675000" to be "Ready" ...
	I1212 12:18:09.930412    6560 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
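	[editor's note] The lines above and below show the readiness gates minikube applies here: it polls GET /api/v1/nodes/multinode-675000 roughly every 500ms until the node reports Ready, then does the same for each system-critical pod (starting with coredns-5dd5756b68-2qgqq) with a 6m0s ceiling. The sketch below is a minimal, illustrative reimplementation of that loop using client-go; it is not minikube's node_ready.go/pod_ready.go code, and the use of the KUBECONFIG environment variable for the config path is an assumption for this example.

	// readiness_poll.go - hedged sketch of the polling pattern seen in this log.
	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has condition Ready=True.
	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	// podReady reports whether the named pod has condition Ready=True.
	func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Assumption: kubeconfig path is taken from the KUBECONFIG env var.
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()

		// Poll every 500ms, as in the log, with a 6-minute ceiling.
		if err := wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			return nodeReady(ctx, cs, "multinode-675000")
		}); err != nil {
			panic(err)
		}
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			return podReady(ctx, cs, "kube-system", "coredns-5dd5756b68-2qgqq")
		})
		fmt.Println("node and coredns pod ready:", err == nil)
	}

	[editor's note] An equivalent manual check from a shell would be roughly "kubectl wait --for=condition=Ready pod -l k8s-app=kube-dns -n kube-system --timeout=6m". The raw log resumes below.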
	I1212 12:18:09.930453    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 12:18:09.930461    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:09.930468    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:09.930476    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:09.932894    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:09.932904    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:09.932910    6560 round_trippers.go:580]     Audit-Id: a7332274-2c98-49c5-aebe-77c207a95b5e
	I1212 12:18:09.932915    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:09.932919    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:09.932925    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:09.932930    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:09.932934    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:10 GMT
	I1212 12:18:09.933457    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"544"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56216 chars]
	I1212 12:18:09.934828    6560 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2qgqq" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:09.934869    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:09.934874    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:09.934882    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:09.934889    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:09.936740    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:09.936752    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:09.936760    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:09.936788    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:10 GMT
	I1212 12:18:09.936794    6560 round_trippers.go:580]     Audit-Id: 5cabd86b-1b9e-4b10-86e7-9c8cadba6bde
	I1212 12:18:09.936799    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:09.936803    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:09.936807    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:09.936875    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:09.937123    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:09.937130    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:09.937135    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:09.937140    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:09.938461    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:09.938478    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:09.938486    6560 round_trippers.go:580]     Audit-Id: 5e513c82-c868-4bf0-a25c-7d33cb3278e9
	I1212 12:18:09.938494    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:09.938500    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:09.938504    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:09.938509    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:09.938513    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:10 GMT
	I1212 12:18:09.938734    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:09.938956    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:09.938963    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:09.938970    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:09.938975    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:09.940405    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:09.940416    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:09.940424    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:09.940433    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:09.940440    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:09.940448    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:10 GMT
	I1212 12:18:09.940459    6560 round_trippers.go:580]     Audit-Id: 179fa64d-2eea-45e1-8b5c-8e56b9021da2
	I1212 12:18:09.940466    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:09.940637    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:09.940913    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:09.940921    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:09.940927    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:09.940932    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:09.942228    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:09.942240    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:09.942246    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:09.942250    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:09.942255    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:09.942260    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:10 GMT
	I1212 12:18:09.942264    6560 round_trippers.go:580]     Audit-Id: 886a268d-0cd8-4e12-82c4-42cdcb92ac87
	I1212 12:18:09.942269    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:09.942357    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:10.443271    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:10.443302    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:10.443362    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:10.443375    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:10.446225    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:10.446246    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:10.446258    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:10.446270    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:10.446281    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:10.446288    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:10.446324    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:10 GMT
	I1212 12:18:10.446331    6560 round_trippers.go:580]     Audit-Id: 3335e55d-f6c2-4570-bd63-af1e5cf02754
	I1212 12:18:10.446509    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:10.446882    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:10.446890    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:10.446895    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:10.446900    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:10.448222    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:10.448231    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:10.448237    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:10.448242    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:10.448246    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:10 GMT
	I1212 12:18:10.448251    6560 round_trippers.go:580]     Audit-Id: 253b09d6-7cd8-455e-a58d-a4a9808938b4
	I1212 12:18:10.448256    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:10.448260    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:10.448336    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:10.943902    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:10.943929    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:10.943941    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:10.943951    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:10.946316    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:10.946330    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:10.946337    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:11 GMT
	I1212 12:18:10.946343    6560 round_trippers.go:580]     Audit-Id: 6c80b283-3afa-4178-88a9-2737cfa82225
	I1212 12:18:10.946349    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:10.946362    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:10.946369    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:10.946375    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:10.946458    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:10.946814    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:10.946822    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:10.946827    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:10.946832    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:10.948128    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:10.948152    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:10.948160    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:10.948181    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:10.948192    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:10.948198    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:10.948202    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:11 GMT
	I1212 12:18:10.948206    6560 round_trippers.go:580]     Audit-Id: 5bbdb391-c64c-45bd-b0cd-b1ccced125ab
	I1212 12:18:10.948294    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:11.444318    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:11.444344    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:11.444390    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:11.444401    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:11.447377    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:11.447396    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:11.447404    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:11.447410    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:11.447417    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:11.447423    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:11.447430    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:11 GMT
	I1212 12:18:11.447436    6560 round_trippers.go:580]     Audit-Id: f04d56dd-4fb9-421c-b64c-c62edc1beab8
	I1212 12:18:11.447553    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:11.447933    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:11.447942    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:11.447951    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:11.447958    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:11.449159    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:11.449168    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:11.449173    6560 round_trippers.go:580]     Audit-Id: 5d5e2d25-0b0c-4281-a91e-680aea9f4c2f
	I1212 12:18:11.449178    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:11.449183    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:11.449188    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:11.449193    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:11.449197    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:11 GMT
	I1212 12:18:11.449385    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:11.944512    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:11.944539    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:11.944552    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:11.944563    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:11.947405    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:11.947424    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:11.947435    6560 round_trippers.go:580]     Audit-Id: f218dd51-853c-44c5-902b-8e48b939a556
	I1212 12:18:11.947454    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:11.947463    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:11.947469    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:11.947476    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:11.947482    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:12 GMT
	I1212 12:18:11.947652    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:11.948018    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:11.948037    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:11.948047    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:11.948054    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:11.949871    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:11.949879    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:11.949908    6560 round_trippers.go:580]     Audit-Id: 3bcd4eb0-b8ad-41a9-9131-9c8f352b23c9
	I1212 12:18:11.949921    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:11.949926    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:11.949931    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:11.949936    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:11.949941    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:12 GMT
	I1212 12:18:11.950057    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:11.950232    6560 pod_ready.go:102] pod "coredns-5dd5756b68-2qgqq" in "kube-system" namespace has status "Ready":"False"
	I1212 12:18:12.442850    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:12.442862    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:12.442870    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:12.442875    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:12.444641    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:12.444653    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:12.444658    6560 round_trippers.go:580]     Audit-Id: 0c426852-da74-4059-b394-ed6430196691
	I1212 12:18:12.444662    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:12.444667    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:12.444672    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:12.444677    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:12.444685    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:12 GMT
	I1212 12:18:12.444749    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:12.445028    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:12.445035    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:12.445041    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:12.445046    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:12.446210    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:12.446222    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:12.446231    6560 round_trippers.go:580]     Audit-Id: ce7bfd64-fa00-4027-b326-eac100a33545
	I1212 12:18:12.446239    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:12.446263    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:12.446273    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:12.446279    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:12.446284    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:12 GMT
	I1212 12:18:12.446389    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:12.943330    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:12.943377    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:12.943388    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:12.943396    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:12.945572    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:12.945608    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:12.945627    6560 round_trippers.go:580]     Audit-Id: 30b17626-425c-4726-a46d-2663fc620977
	I1212 12:18:12.945639    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:12.945646    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:12.945651    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:12.945655    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:12.945660    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:13 GMT
	I1212 12:18:12.945738    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:12.946019    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:12.946026    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:12.946032    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:12.946037    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:12.947301    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:12.947309    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:12.947314    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:12.947319    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:12.947325    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:12.947329    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:12.947334    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:13 GMT
	I1212 12:18:12.947339    6560 round_trippers.go:580]     Audit-Id: 1d85e5a2-47e8-4f84-8bb0-d98dba169cd4
	I1212 12:18:12.947626    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:13.443284    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:13.443309    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:13.443326    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:13.443337    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:13.446285    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:13.446297    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:13.446310    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:13.446321    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:13 GMT
	I1212 12:18:13.446336    6560 round_trippers.go:580]     Audit-Id: 8aca9811-f14c-42a8-ac37-48eecec4446e
	I1212 12:18:13.446343    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:13.446349    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:13.446356    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:13.446455    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:13.446820    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:13.446829    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:13.446837    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:13.446846    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:13.448526    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:13.448535    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:13.448540    6560 round_trippers.go:580]     Audit-Id: 54108971-0e99-45f0-9358-0cc98742cb57
	I1212 12:18:13.448545    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:13.448552    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:13.448565    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:13.448571    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:13.448578    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:13 GMT
	I1212 12:18:13.448774    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:13.943412    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:13.943425    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:13.943432    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:13.943437    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:13.945182    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:13.945191    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:13.945196    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:13.945202    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:13.945207    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:13.945212    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:14 GMT
	I1212 12:18:13.945217    6560 round_trippers.go:580]     Audit-Id: bb99ba7b-e9b6-403d-ad05-d9b9d2cc9043
	I1212 12:18:13.945222    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:13.945304    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:13.945629    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:13.945637    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:13.945643    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:13.945648    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:13.947117    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:13.947128    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:13.947134    6560 round_trippers.go:580]     Audit-Id: f56228f8-9c86-4e07-b95a-39a1223a9398
	I1212 12:18:13.947140    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:13.947147    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:13.947155    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:13.947161    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:13.947166    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:14 GMT
	I1212 12:18:13.947306    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:14.443388    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:14.443408    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:14.443421    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:14.443431    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:14.446223    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:14.446236    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:14.446243    6560 round_trippers.go:580]     Audit-Id: cada5354-d061-4653-a838-4419dab7de89
	I1212 12:18:14.446250    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:14.446256    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:14.446262    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:14.446272    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:14.446278    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:14 GMT
	I1212 12:18:14.446361    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:14.446756    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:14.446765    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:14.446773    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:14.446780    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:14.448217    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:14.448226    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:14.448231    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:14.448236    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:14 GMT
	I1212 12:18:14.448241    6560 round_trippers.go:580]     Audit-Id: 114d8eee-b660-467e-aaeb-3642456f400c
	I1212 12:18:14.448245    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:14.448249    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:14.448254    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:14.448414    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:14.448592    6560 pod_ready.go:102] pod "coredns-5dd5756b68-2qgqq" in "kube-system" namespace has status "Ready":"False"
	I1212 12:18:14.943893    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:14.943909    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:14.943916    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:14.943921    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:14.945845    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:14.945855    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:14.945861    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:15 GMT
	I1212 12:18:14.945867    6560 round_trippers.go:580]     Audit-Id: e6f60e1a-0172-4da9-bc20-429c83872bcb
	I1212 12:18:14.945874    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:14.945881    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:14.945889    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:14.945894    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:14.946049    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:14.946329    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:14.946336    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:14.946342    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:14.946347    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:14.947913    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:14.947925    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:14.947938    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:14.947946    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:14.947951    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:14.947956    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:14.947961    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:15 GMT
	I1212 12:18:14.947966    6560 round_trippers.go:580]     Audit-Id: bf6b64d2-eb03-4f53-a7be-eebb7e6b90c3
	I1212 12:18:14.948185    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:15.444318    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:15.444337    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:15.444350    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:15.444359    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:15.447031    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:15.447046    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:15.447057    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:15 GMT
	I1212 12:18:15.447067    6560 round_trippers.go:580]     Audit-Id: 8621a8d7-06e6-4811-b47e-da87742241de
	I1212 12:18:15.447077    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:15.447086    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:15.447094    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:15.447106    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:15.447360    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:15.447646    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:15.447655    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:15.447660    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:15.447666    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:15.449054    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:15.449066    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:15.449071    6560 round_trippers.go:580]     Audit-Id: 21f64c63-ad1b-47ba-8bd5-0fb6d59e22ef
	I1212 12:18:15.449076    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:15.449080    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:15.449085    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:15.449090    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:15.449095    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:15 GMT
	I1212 12:18:15.449171    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:15.943473    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:15.943555    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:15.943577    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:15.943590    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:15.946335    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:15.946348    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:15.946356    6560 round_trippers.go:580]     Audit-Id: 0236ee43-05ef-49a6-a87e-d67d30375b04
	I1212 12:18:15.946382    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:15.946394    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:15.946413    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:15.946419    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:15.946427    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:16 GMT
	I1212 12:18:15.946547    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:15.946914    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:15.946923    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:15.946932    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:15.946939    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:15.948394    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:15.948410    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:15.948443    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:15.948473    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:15.948480    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:15.948485    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:16 GMT
	I1212 12:18:15.948489    6560 round_trippers.go:580]     Audit-Id: 4f30496c-0778-423d-8d9b-500c5910929e
	I1212 12:18:15.948494    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:15.948638    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:16.443164    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:16.443177    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:16.443183    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:16.443188    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:16.444818    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:16.444830    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:16.444836    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:16.444845    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:16.444851    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:16.444855    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:16.444859    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:16 GMT
	I1212 12:18:16.444865    6560 round_trippers.go:580]     Audit-Id: b4656bdd-fd23-4936-9962-fc8098de63f2
	I1212 12:18:16.444964    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:16.445252    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:16.445260    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:16.445266    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:16.445273    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:16.446609    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:16.446616    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:16.446621    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:16.446626    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:16.446630    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:16 GMT
	I1212 12:18:16.446635    6560 round_trippers.go:580]     Audit-Id: f611d419-13e1-4837-acba-8e95546d7d9e
	I1212 12:18:16.446641    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:16.446648    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:16.446824    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:16.943106    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:16.943155    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:16.943162    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:16.943169    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:16.945050    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:16.945063    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:16.945079    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:16.945087    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:16.945096    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:16.945102    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:17 GMT
	I1212 12:18:16.945106    6560 round_trippers.go:580]     Audit-Id: 72a1a515-0391-4fcd-978f-17a680efc7d8
	I1212 12:18:16.945111    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:16.945186    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:16.945464    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:16.945472    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:16.945477    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:16.945482    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:16.946699    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:16.946708    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:16.946714    6560 round_trippers.go:580]     Audit-Id: f573425c-e362-4fc7-8696-c679e773a11c
	I1212 12:18:16.946719    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:16.946724    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:16.946733    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:16.946740    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:16.946750    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:17 GMT
	I1212 12:18:16.946903    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:16.947089    6560 pod_ready.go:102] pod "coredns-5dd5756b68-2qgqq" in "kube-system" namespace has status "Ready":"False"
	I1212 12:18:17.443696    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:17.443708    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:17.443714    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:17.443719    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:17.445442    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:17.445453    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:17.445459    6560 round_trippers.go:580]     Audit-Id: 29c2b9ee-469a-4d4e-a97b-2fcfb2dacaf0
	I1212 12:18:17.445465    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:17.445470    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:17.445474    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:17.445479    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:17.445483    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:17 GMT
	I1212 12:18:17.445554    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:17.445849    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:17.445856    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:17.445862    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:17.445866    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:17.447088    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:17.447099    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:17.447106    6560 round_trippers.go:580]     Audit-Id: adf4f679-bc1e-4894-9fdd-ec1df1585bdb
	I1212 12:18:17.447114    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:17.447122    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:17.447129    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:17.447134    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:17.447139    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:17 GMT
	I1212 12:18:17.447234    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:17.943292    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:17.943315    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:17.943324    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:17.943331    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:17.945805    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:17.945819    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:17.945841    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:17.945857    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:18 GMT
	I1212 12:18:17.945867    6560 round_trippers.go:580]     Audit-Id: b8024e23-9731-4fa2-9061-df03fb5c4c92
	I1212 12:18:17.945877    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:17.945896    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:17.945908    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:17.946007    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:17.946322    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:17.946331    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:17.946337    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:17.946343    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:17.947701    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:17.947708    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:17.947714    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:17.947723    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:17.947729    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:17.947736    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:17.947741    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:18 GMT
	I1212 12:18:17.947745    6560 round_trippers.go:580]     Audit-Id: e063e220-747b-4158-a9cc-1da2bcf02e68
	I1212 12:18:17.947952    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:18.444058    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:18.444091    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:18.444104    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:18.444114    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:18.447675    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 12:18:18.447693    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:18.447701    6560 round_trippers.go:580]     Audit-Id: b3644118-52f1-4b23-a5e3-c3fdeff00464
	I1212 12:18:18.447707    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:18.447714    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:18.447720    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:18.447735    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:18.447743    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:18 GMT
	I1212 12:18:18.447938    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"466","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6541 chars]
	I1212 12:18:18.448323    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:18.448338    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:18.448346    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:18.448353    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:18.449867    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:18.449875    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:18.449880    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:18 GMT
	I1212 12:18:18.449885    6560 round_trippers.go:580]     Audit-Id: 7711ce19-dfa4-4b60-bbd4-e419637632bf
	I1212 12:18:18.449889    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:18.449894    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:18.449898    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:18.449902    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:18.450109    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:18.943613    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2qgqq
	I1212 12:18:18.943629    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:18.943638    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:18.943645    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:18.946368    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:18.946383    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:18.946391    6560 round_trippers.go:580]     Audit-Id: 3440ce36-90db-4ee5-86fc-849b600fa3e2
	I1212 12:18:18.946398    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:18.946406    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:18.946413    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:18.946420    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:18.946428    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:19 GMT
	I1212 12:18:18.946670    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"565","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6489 chars]
	I1212 12:18:18.946982    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:18.946989    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:18.946996    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:18.947001    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:18.948764    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:18.948776    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:18.948782    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:18.948786    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:18.948799    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:19 GMT
	I1212 12:18:18.948804    6560 round_trippers.go:580]     Audit-Id: 440346fd-808a-4b84-afa2-bfe00a837dc6
	I1212 12:18:18.948812    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:18.948817    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:18.949066    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:18.949259    6560 pod_ready.go:92] pod "coredns-5dd5756b68-2qgqq" in "kube-system" namespace has status "Ready":"True"
	I1212 12:18:18.949269    6560 pod_ready.go:81] duration metric: took 9.014564999s waiting for pod "coredns-5dd5756b68-2qgqq" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:18.949276    6560 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:18.949307    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-675000
	I1212 12:18:18.949312    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:18.949318    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:18.949323    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:18.951024    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:18.951037    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:18.951043    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:18.951048    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:18.951054    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:19 GMT
	I1212 12:18:18.951058    6560 round_trippers.go:580]     Audit-Id: 2674172f-3781-48cf-b60d-c66a351dd7cf
	I1212 12:18:18.951068    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:18.951095    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:18.951253    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-675000","namespace":"kube-system","uid":"bca57b7b-a960-4492-8f79-e6f8aa87f070","resourceVersion":"557","creationTimestamp":"2023-12-12T20:16:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"b8a6875b46c6a0a1242452e56d9fe808","kubernetes.io/config.mirror":"b8a6875b46c6a0a1242452e56d9fe808","kubernetes.io/config.seen":"2023-12-12T20:16:44.273254977Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6061 chars]
	I1212 12:18:18.951667    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:18.951675    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:18.951684    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:18.951689    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:18.953699    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:18.953708    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:18.953714    6560 round_trippers.go:580]     Audit-Id: 08b5ca47-e8e2-4668-926f-a1c98f950e8c
	I1212 12:18:18.953718    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:18.953723    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:18.953728    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:18.953733    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:18.953750    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:19 GMT
	I1212 12:18:18.953882    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:18.954063    6560 pod_ready.go:92] pod "etcd-multinode-675000" in "kube-system" namespace has status "Ready":"True"
	I1212 12:18:18.954071    6560 pod_ready.go:81] duration metric: took 4.790517ms waiting for pod "etcd-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:18.954083    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:18.954113    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-675000
	I1212 12:18:18.954118    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:18.954124    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:18.954129    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:18.956125    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:18.956144    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:18.956152    6560 round_trippers.go:580]     Audit-Id: 5913e9af-dac3-4cf0-be5b-7d48a461cd9e
	I1212 12:18:18.956158    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:18.956163    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:18.956168    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:18.956173    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:18.956178    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:19 GMT
	I1212 12:18:18.956449    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-675000","namespace":"kube-system","uid":"8c377a02-06d4-44e2-a275-5a72e7917a90","resourceVersion":"546","creationTimestamp":"2023-12-12T20:16:51Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"a93d2462fc4179c4ac4fea222dfb096b","kubernetes.io/config.mirror":"a93d2462fc4179c4ac4fea222dfb096b","kubernetes.io/config.seen":"2023-12-12T20:16:51.301865289Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7597 chars]
	I1212 12:18:18.956721    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:18.956729    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:18.956735    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:18.956740    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:18.958508    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:18.958521    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:18.958529    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:18.958537    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:19 GMT
	I1212 12:18:18.958545    6560 round_trippers.go:580]     Audit-Id: df1048b9-00a4-4196-8fad-48e9c85a248e
	I1212 12:18:18.958553    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:18.958560    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:18.958568    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:18.958711    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:18.958988    6560 pod_ready.go:92] pod "kube-apiserver-multinode-675000" in "kube-system" namespace has status "Ready":"True"
	I1212 12:18:18.958999    6560 pod_ready.go:81] duration metric: took 4.910102ms waiting for pod "kube-apiserver-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:18.959019    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:18.959074    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-675000
	I1212 12:18:18.959081    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:18.959090    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:18.959099    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:18.960731    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:18.960741    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:18.960753    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:18.960762    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:18.960797    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:18.960805    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:19 GMT
	I1212 12:18:18.960810    6560 round_trippers.go:580]     Audit-Id: 37bf7f72-2e07-47b6-b651-117a4529d79a
	I1212 12:18:18.960831    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:18.960929    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-675000","namespace":"kube-system","uid":"d99bab41-1594-4f91-b6cf-63f143cbd1fb","resourceVersion":"543","creationTimestamp":"2023-12-12T20:16:51Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e889149f3645071732e65c53e76071e","kubernetes.io/config.mirror":"8e889149f3645071732e65c53e76071e","kubernetes.io/config.seen":"2023-12-12T20:16:51.301865920Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7170 chars]
	I1212 12:18:18.961208    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:18.961216    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:18.961222    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:18.961227    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:18.962483    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:18.962490    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:18.962495    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:18.962500    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:18.962504    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:18.962508    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:18.962514    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:19 GMT
	I1212 12:18:18.962521    6560 round_trippers.go:580]     Audit-Id: 7d380f3a-99ff-4213-84a6-2f91488033d8
	I1212 12:18:18.962613    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:18.962801    6560 pod_ready.go:92] pod "kube-controller-manager-multinode-675000" in "kube-system" namespace has status "Ready":"True"
	I1212 12:18:18.962812    6560 pod_ready.go:81] duration metric: took 3.781478ms waiting for pod "kube-controller-manager-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:18.962819    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q4dfx" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:18.962855    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q4dfx
	I1212 12:18:18.962860    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:18.962866    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:18.962872    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:18.964139    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:18.964151    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:18.964159    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:18.964167    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:18.964175    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:19 GMT
	I1212 12:18:18.964180    6560 round_trippers.go:580]     Audit-Id: 6056fd97-ce44-4e86-b8e5-8c0392b29595
	I1212 12:18:18.964185    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:18.964190    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:18.964304    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q4dfx","generateName":"kube-proxy-","namespace":"kube-system","uid":"2a62b5cc-b780-4ef5-8663-4a01ca0e2932","resourceVersion":"474","creationTimestamp":"2023-12-12T20:17:03Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5e692c0d-042c-458d-9e34-28feed1938bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e692c0d-042c-458d-9e34-28feed1938bc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5734 chars]
	I1212 12:18:18.964575    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:18.964582    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:18.964588    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:18.964594    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:18.966004    6560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 12:18:18.966011    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:18.966016    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:19 GMT
	I1212 12:18:18.966020    6560 round_trippers.go:580]     Audit-Id: f722590d-67dc-4bf7-a0ef-919d4fdf2717
	I1212 12:18:18.966048    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:18.966068    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:18.966073    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:18.966079    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:18.966167    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:18.966336    6560 pod_ready.go:92] pod "kube-proxy-q4dfx" in "kube-system" namespace has status "Ready":"True"
	I1212 12:18:18.966344    6560 pod_ready.go:81] duration metric: took 3.520685ms waiting for pod "kube-proxy-q4dfx" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:18.966359    6560 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:19.143770    6560 request.go:629] Waited for 177.381472ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-675000
	I1212 12:18:19.143858    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-675000
	I1212 12:18:19.143864    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:19.143886    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:19.143894    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:19.145948    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:19.145959    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:19.145964    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:19.145969    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:19.145974    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:19 GMT
	I1212 12:18:19.145983    6560 round_trippers.go:580]     Audit-Id: ec43dd7e-50ff-4248-9ef1-8862ad63b3e6
	I1212 12:18:19.145988    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:19.145993    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:19.146177    6560 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-675000","namespace":"kube-system","uid":"a51d1149-64de-4c6e-a8ae-d04d45097278","resourceVersion":"540","creationTimestamp":"2023-12-12T20:16:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"94c171efd7c72f0a76d945c5e6e993d1","kubernetes.io/config.mirror":"94c171efd7c72f0a76d945c5e6e993d1","kubernetes.io/config.seen":"2023-12-12T20:16:51.301860165Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4900 chars]
	I1212 12:18:19.344658    6560 request.go:629] Waited for 198.235478ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:19.356348    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-675000
	I1212 12:18:19.356359    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:19.356371    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:19.356388    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:19.359573    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 12:18:19.359592    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:19.359600    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:19 GMT
	I1212 12:18:19.359609    6560 round_trippers.go:580]     Audit-Id: 3ed4aeed-e42e-42fb-aba9-96d7380f842f
	I1212 12:18:19.359617    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:19.359632    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:19.359640    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:19.359646    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:19.359762    6560 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-12T20:16:48Z","fieldsType":"FieldsV1","fi [truncated 5003 chars]
	I1212 12:18:19.360036    6560 pod_ready.go:92] pod "kube-scheduler-multinode-675000" in "kube-system" namespace has status "Ready":"True"
	I1212 12:18:19.360048    6560 pod_ready.go:81] duration metric: took 393.688814ms waiting for pod "kube-scheduler-multinode-675000" in "kube-system" namespace to be "Ready" ...
	I1212 12:18:19.360057    6560 pod_ready.go:38] duration metric: took 9.429775722s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 12:18:19.360075    6560 api_server.go:52] waiting for apiserver process to appear ...
	I1212 12:18:19.360131    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 12:18:19.370351    6560 command_runner.go:130] > 1753
	I1212 12:18:19.370372    6560 api_server.go:72] duration metric: took 14.325685861s to wait for apiserver process to appear ...
	I1212 12:18:19.370378    6560 api_server.go:88] waiting for apiserver healthz status ...
	I1212 12:18:19.370387    6560 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I1212 12:18:19.373809    6560 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I1212 12:18:19.373844    6560 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I1212 12:18:19.373849    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:19.373856    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:19.373861    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:19.374562    6560 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 12:18:19.374571    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:19.374577    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:19.374582    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:19.374587    6560 round_trippers.go:580]     Content-Length: 264
	I1212 12:18:19.374592    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:19 GMT
	I1212 12:18:19.374597    6560 round_trippers.go:580]     Audit-Id: 84531607-974f-4b6e-84ba-cf3d984f54ff
	I1212 12:18:19.374601    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:19.374610    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:19.374629    6560 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1212 12:18:19.374657    6560 api_server.go:141] control plane version: v1.28.4
	I1212 12:18:19.374665    6560 api_server.go:131] duration metric: took 4.2838ms to wait for apiserver health ...
	I1212 12:18:19.374670    6560 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 12:18:19.544272    6560 request.go:629] Waited for 169.545601ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 12:18:19.544303    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 12:18:19.544308    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:19.544315    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:19.544321    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:19.547636    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 12:18:19.547662    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:19.547669    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:19.547673    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:19.547678    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:19.547683    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:19.547710    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:19 GMT
	I1212 12:18:19.547714    6560 round_trippers.go:580]     Audit-Id: 65aefba7-7da6-4ef5-8736-1d8ccfccb2a7
	I1212 12:18:19.548202    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"569"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"565","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55696 chars]
	I1212 12:18:19.549627    6560 system_pods.go:59] 8 kube-system pods found
	I1212 12:18:19.549638    6560 system_pods.go:61] "coredns-5dd5756b68-2qgqq" [6bc47af7-f871-4daa-97ca-23500d80fc1b] Running
	I1212 12:18:19.549642    6560 system_pods.go:61] "etcd-multinode-675000" [bca57b7b-a960-4492-8f79-e6f8aa87f070] Running
	I1212 12:18:19.549646    6560 system_pods.go:61] "kindnet-4vq6m" [c528f3f9-a180-497c-892d-0305174740c9] Running
	I1212 12:18:19.549649    6560 system_pods.go:61] "kube-apiserver-multinode-675000" [8c377a02-06d4-44e2-a275-5a72e7917a90] Running
	I1212 12:18:19.549653    6560 system_pods.go:61] "kube-controller-manager-multinode-675000" [d99bab41-1594-4f91-b6cf-63f143cbd1fb] Running
	I1212 12:18:19.549657    6560 system_pods.go:61] "kube-proxy-q4dfx" [2a62b5cc-b780-4ef5-8663-4a01ca0e2932] Running
	I1212 12:18:19.549660    6560 system_pods.go:61] "kube-scheduler-multinode-675000" [a51d1149-64de-4c6e-a8ae-d04d45097278] Running
	I1212 12:18:19.549664    6560 system_pods.go:61] "storage-provisioner" [6f39d754-bc48-49e5-a0e4-fda2cbf521b7] Running
	I1212 12:18:19.549668    6560 system_pods.go:74] duration metric: took 174.997151ms to wait for pod list to return data ...
	I1212 12:18:19.549700    6560 default_sa.go:34] waiting for default service account to be created ...
	I1212 12:18:19.743973    6560 request.go:629] Waited for 194.194144ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I1212 12:18:19.744020    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I1212 12:18:19.744028    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:19.744037    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:19.744044    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:19.746234    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:19.746244    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:19.746250    6560 round_trippers.go:580]     Audit-Id: 20e22a62-8add-477a-9020-f770b2ad00e9
	I1212 12:18:19.746255    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:19.746260    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:19.746264    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:19.746269    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:19.746276    6560 round_trippers.go:580]     Content-Length: 261
	I1212 12:18:19.746281    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:19 GMT
	I1212 12:18:19.746292    6560 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"569"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"0a0d85bc-4ea5-43ff-8744-122b952a826b","resourceVersion":"354","creationTimestamp":"2023-12-12T20:17:04Z"}}]}
	I1212 12:18:19.746405    6560 default_sa.go:45] found service account: "default"
	I1212 12:18:19.746415    6560 default_sa.go:55] duration metric: took 196.712434ms for default service account to be created ...
	I1212 12:18:19.746420    6560 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 12:18:19.945536    6560 request.go:629] Waited for 199.065291ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 12:18:19.945614    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I1212 12:18:19.945625    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:19.945637    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:19.945647    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:19.949414    6560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 12:18:19.949432    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:19.949442    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:19.949451    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:19.949458    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:20 GMT
	I1212 12:18:19.949463    6560 round_trippers.go:580]     Audit-Id: c28d35ea-3b9f-48d8-850c-108e4d139e2c
	I1212 12:18:19.949467    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:19.949472    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:19.949953    6560 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"569"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2qgqq","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"6bc47af7-f871-4daa-97ca-23500d80fc1b","resourceVersion":"565","creationTimestamp":"2023-12-12T20:17:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"d4419174-5cd1-4622-956f-c56de30be073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-12T20:17:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4419174-5cd1-4622-956f-c56de30be073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55696 chars]
	I1212 12:18:19.951296    6560 system_pods.go:86] 8 kube-system pods found
	I1212 12:18:19.951307    6560 system_pods.go:89] "coredns-5dd5756b68-2qgqq" [6bc47af7-f871-4daa-97ca-23500d80fc1b] Running
	I1212 12:18:19.951312    6560 system_pods.go:89] "etcd-multinode-675000" [bca57b7b-a960-4492-8f79-e6f8aa87f070] Running
	I1212 12:18:19.951339    6560 system_pods.go:89] "kindnet-4vq6m" [c528f3f9-a180-497c-892d-0305174740c9] Running
	I1212 12:18:19.951347    6560 system_pods.go:89] "kube-apiserver-multinode-675000" [8c377a02-06d4-44e2-a275-5a72e7917a90] Running
	I1212 12:18:19.951351    6560 system_pods.go:89] "kube-controller-manager-multinode-675000" [d99bab41-1594-4f91-b6cf-63f143cbd1fb] Running
	I1212 12:18:19.951356    6560 system_pods.go:89] "kube-proxy-q4dfx" [2a62b5cc-b780-4ef5-8663-4a01ca0e2932] Running
	I1212 12:18:19.951361    6560 system_pods.go:89] "kube-scheduler-multinode-675000" [a51d1149-64de-4c6e-a8ae-d04d45097278] Running
	I1212 12:18:19.951369    6560 system_pods.go:89] "storage-provisioner" [6f39d754-bc48-49e5-a0e4-fda2cbf521b7] Running
	I1212 12:18:19.951376    6560 system_pods.go:126] duration metric: took 204.955351ms to wait for k8s-apps to be running ...
	I1212 12:18:19.951382    6560 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 12:18:19.951423    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 12:18:19.962181    6560 system_svc.go:56] duration metric: took 10.792883ms WaitForService to wait for kubelet.
	I1212 12:18:19.962195    6560 kubeadm.go:581] duration metric: took 14.917519145s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1212 12:18:19.962207    6560 node_conditions.go:102] verifying NodePressure condition ...
	I1212 12:18:20.143834    6560 request.go:629] Waited for 181.586046ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I1212 12:18:20.143938    6560 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I1212 12:18:20.143953    6560 round_trippers.go:469] Request Headers:
	I1212 12:18:20.143968    6560 round_trippers.go:473]     Accept: application/json, */*
	I1212 12:18:20.143984    6560 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1212 12:18:20.146958    6560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 12:18:20.146979    6560 round_trippers.go:577] Response Headers:
	I1212 12:18:20.146993    6560 round_trippers.go:580]     Audit-Id: 3cb0a20f-2f40-4c8c-b0bb-07268e59486f
	I1212 12:18:20.147004    6560 round_trippers.go:580]     Cache-Control: no-cache, private
	I1212 12:18:20.147011    6560 round_trippers.go:580]     Content-Type: application/json
	I1212 12:18:20.147030    6560 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0c65aa88-6a01-4025-bda3-574d79dea0ee
	I1212 12:18:20.147038    6560 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5d10954-26ac-4771-a7b7-0b43ad2377b4
	I1212 12:18:20.147044    6560 round_trippers.go:580]     Date: Tue, 12 Dec 2023 20:18:20 GMT
	I1212 12:18:20.147262    6560 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"569"},"items":[{"metadata":{"name":"multinode-675000","uid":"6eab4d85-c3dc-44b6-8086-978fceb1bbec","resourceVersion":"544","creationTimestamp":"2023-12-12T20:16:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-675000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"bbafb8443bb801a11d242513c0872b48bb9d80e1","minikube.k8s.io/name":"multinode-675000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_12T12_16_52_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5056 chars]
	I1212 12:18:20.147522    6560 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 12:18:20.147536    6560 node_conditions.go:123] node cpu capacity is 2
	I1212 12:18:20.147544    6560 node_conditions.go:105] duration metric: took 185.336799ms to run NodePressure ...
	I1212 12:18:20.147554    6560 start.go:228] waiting for startup goroutines ...
	I1212 12:18:20.147561    6560 start.go:233] waiting for cluster config update ...
	I1212 12:18:20.147575    6560 start.go:242] writing updated cluster config ...
	I1212 12:18:20.147986    6560 ssh_runner.go:195] Run: rm -f paused
	I1212 12:18:20.188240    6560 start.go:600] kubectl: 1.28.2, cluster: 1.28.4 (minor skew: 0)
	I1212 12:18:20.232794    6560 out.go:177] * Done! kubectl is now configured to use "multinode-675000" cluster and "default" namespace by default
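
The start log above shows minikube polling each control-plane pod in kube-system until its Ready condition is True (the pod_ready.go lines), before moving on to the apiserver checks. Below is a minimal client-go sketch of that readiness poll; the kubeconfig path, pod name, and timeout are illustrative and are not taken from minikube's source.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True, the same
// check that produces the `has status "Ready":"True"` lines above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path; use whatever the profile actually wrote.
	config, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/17734-1975/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll one control-plane pod until it is Ready or a 6m deadline passes,
	// mirroring the "waiting up to 6m0s" messages logged by pod_ready.go.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-multinode-675000", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Printf("pod %s is Ready\n", pod.Name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
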
	
	
	==> Docker <==
	-- Journal begins at Tue 2023-12-12 20:17:37 UTC, ends at Tue 2023-12-12 20:18:21 UTC. --
	Dec 12 20:18:02 multinode-675000 dockerd[821]: time="2023-12-12T20:18:02.191909677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:02 multinode-675000 cri-dockerd[1024]: time="2023-12-12T20:18:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/92f29a96ae4144d6c320eca42d930561bab97cd5b7de520b97fef4e69c5e514b/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 20:18:02 multinode-675000 dockerd[821]: time="2023-12-12T20:18:02.347561453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 20:18:02 multinode-675000 dockerd[821]: time="2023-12-12T20:18:02.348092272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:02 multinode-675000 dockerd[821]: time="2023-12-12T20:18:02.348216467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 20:18:02 multinode-675000 dockerd[821]: time="2023-12-12T20:18:02.348272763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:02 multinode-675000 cri-dockerd[1024]: time="2023-12-12T20:18:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/075926302b8a7db5b17e4029f40a3ea644500efa4bda04436ff86e1d0b6bd7c1/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 20:18:02 multinode-675000 dockerd[821]: time="2023-12-12T20:18:02.828687046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 20:18:02 multinode-675000 dockerd[821]: time="2023-12-12T20:18:02.829493222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:02 multinode-675000 dockerd[821]: time="2023-12-12T20:18:02.829535289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 20:18:02 multinode-675000 dockerd[821]: time="2023-12-12T20:18:02.829545935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:04 multinode-675000 cri-dockerd[1024]: time="2023-12-12T20:18:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc0b30a73c66dd1b745bdb2bcedf1caf4be4063ee094ccb10af19d2aaed40549/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 20:18:04 multinode-675000 dockerd[821]: time="2023-12-12T20:18:04.802308848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 20:18:04 multinode-675000 dockerd[821]: time="2023-12-12T20:18:04.802403506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:04 multinode-675000 dockerd[821]: time="2023-12-12T20:18:04.802424033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 20:18:04 multinode-675000 dockerd[821]: time="2023-12-12T20:18:04.802724069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.489204263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.489265671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.489310914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.489690212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:17 multinode-675000 cri-dockerd[1024]: time="2023-12-12T20:18:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1ac252978e0000c3a34bc770a591308d8d8eb559aafc048526a2323dace4e385/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.866327711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.866399495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.866418710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.866428882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	40abbd6ba5851       ead0a4a53df89                                                                              4 seconds ago        Running             coredns                   1                   1ac252978e000       coredns-5dd5756b68-2qgqq
	9420be4a7a64d       c7d1297425461                                                                              17 seconds ago       Running             kindnet-cni               1                   bc0b30a73c66d       kindnet-4vq6m
	d9e94810ceb68       6e38f40d628db                                                                              19 seconds ago       Running             storage-provisioner       1                   075926302b8a7       storage-provisioner
	2c9cb416955ce       83f6cc407eed8                                                                              19 seconds ago       Running             kube-proxy                1                   92f29a96ae414       kube-proxy-q4dfx
	13a33f6b88010       73deb9a3f7025                                                                              23 seconds ago       Running             etcd                      1                   c76d4e0618a55       etcd-multinode-675000
	fb02933e38d84       7fe0e6f37db33                                                                              23 seconds ago       Running             kube-apiserver            1                   660eb1a0b7c78       kube-apiserver-multinode-675000
	10a8d5eab4494       d058aa5ab969c                                                                              24 seconds ago       Running             kube-controller-manager   1                   346cfc6369ea4       kube-controller-manager-multinode-675000
	6e2edde92c79a       e3db313c6dbc0                                                                              24 seconds ago       Running             kube-scheduler            1                   75aadb61316a2       kube-scheduler-multinode-675000
	5139a190a0a70       ead0a4a53df89                                                                              About a minute ago   Exited              coredns                   0                   906956fbad371       coredns-5dd5756b68-2qgqq
	0b9a6a315baee       6e38f40d628db                                                                              About a minute ago   Exited              storage-provisioner       0                   6800e8084788d       storage-provisioner
	a391a1302e24d       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052   About a minute ago   Exited              kindnet-cni               0                   c6f5291d5248b       kindnet-4vq6m
	5c4ec41a543b9       83f6cc407eed8                                                                              About a minute ago   Exited              kube-proxy                0                   c4d605b91fefd       kube-proxy-q4dfx
	ec1ccfe051cf8       e3db313c6dbc0                                                                              About a minute ago   Exited              kube-scheduler            0                   ec16ed8743035       kube-scheduler-multinode-675000
	0dfb53ca11626       73deb9a3f7025                                                                              About a minute ago   Exited              etcd                      0                   759eb904c17af       etcd-multinode-675000
	2e3863acd67e9       d058aa5ab969c                                                                              About a minute ago   Exited              kube-controller-manager   0                   5365eadc60c2d       kube-controller-manager-multinode-675000
	6a5980fcc6dc9       7fe0e6f37db33                                                                              About a minute ago   Exited              kube-apiserver            0                   32f46c3efb2c7       kube-apiserver-multinode-675000
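
The table above shows the restarted kube-apiserver container running; earlier in the start log, api_server.go confirmed it was serving by requesting /healthz (expecting the body "ok") and /version (reporting v1.28.4). A rough equivalent of those two probes with client-go follows; the kubeconfig path is illustrative.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path for the multinode-675000 profile.
	config, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/17734-1975/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// GET /healthz; a healthy apiserver answers 200 with the body "ok",
	// as seen at api_server.go:279 above.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version; the log above shows v1.28.4 for this control plane.
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion)
}
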
	
	
	==> coredns [40abbd6ba585] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46491 - 57509 "HINFO IN 5857152611344469365.3469466211013647927. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014178077s
	
	
	==> coredns [5139a190a0a7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39661 - 33289 "HINFO IN 696511843846326458.4911786665791153147. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.013651973s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-675000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-675000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=multinode-675000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T12_16_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 20:16:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-675000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 20:18:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 20:18:09 +0000   Tue, 12 Dec 2023 20:16:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 20:18:09 +0000   Tue, 12 Dec 2023 20:16:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 20:18:09 +0000   Tue, 12 Dec 2023 20:16:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 20:18:09 +0000   Tue, 12 Dec 2023 20:18:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-675000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7d644f703be46d69b715610990df26b
	  System UUID:                fbe411ee-0000-0000-b1fb-f01898ef957c
	  Boot ID:                    ce0e12ea-fd35-4f70-958b-a5f29488f39c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-2qgqq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
	  kube-system                 etcd-multinode-675000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         91s
	  kube-system                 kindnet-4vq6m                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      78s
	  kube-system                 kube-apiserver-multinode-675000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-multinode-675000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-q4dfx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-multinode-675000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 76s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 97s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s (x8 over 97s)  kubelet          Node multinode-675000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x8 over 97s)  kubelet          Node multinode-675000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x7 over 97s)  kubelet          Node multinode-675000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  97s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    90s                kubelet          Node multinode-675000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  90s                kubelet          Node multinode-675000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     90s                kubelet          Node multinode-675000 status is now: NodeHasSufficientPID
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           78s                node-controller  Node multinode-675000 event: Registered Node multinode-675000 in Controller
	  Normal  NodeReady                68s                kubelet          Node multinode-675000 status is now: NodeReady
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node multinode-675000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node multinode-675000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node multinode-675000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                 node-controller  Node multinode-675000 event: Registered Node multinode-675000 in Controller
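
The describe-nodes output above matches the node_conditions.go checks near the end of the start log: ephemeral-storage capacity 17784752Ki, 2 CPUs, and MemoryPressure/DiskPressure/PIDPressure all False. A small client-go sketch that reads the same fields is below; the kubeconfig path is illustrative.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path.
	config, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/17734-1975/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// Capacity fields reported under "Capacity:" above (cpu, ephemeral-storage).
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", node.Name, cpu.String(), storage.String())

		// The pressure conditions verified by node_conditions.go in the start log.
		for _, c := range node.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
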
	
	
	==> dmesg <==
	[  +0.029205] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +5.116212] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007470] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.956604] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.039372] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.932731] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.063668] systemd-fstab-generator[511]: Ignoring "noauto" for root device
	[  +0.096374] systemd-fstab-generator[522]: Ignoring "noauto" for root device
	[  +0.749987] systemd-fstab-generator[733]: Ignoring "noauto" for root device
	[  +0.237846] systemd-fstab-generator[772]: Ignoring "noauto" for root device
	[  +0.103960] systemd-fstab-generator[783]: Ignoring "noauto" for root device
	[  +0.099137] systemd-fstab-generator[796]: Ignoring "noauto" for root device
	[  +1.234447] kauditd_printk_skb: 30 callbacks suppressed
	[  +0.152778] systemd-fstab-generator[969]: Ignoring "noauto" for root device
	[  +0.084812] systemd-fstab-generator[980]: Ignoring "noauto" for root device
	[  +0.083836] systemd-fstab-generator[991]: Ignoring "noauto" for root device
	[  +0.095398] systemd-fstab-generator[1002]: Ignoring "noauto" for root device
	[  +0.114848] systemd-fstab-generator[1016]: Ignoring "noauto" for root device
	[ +11.809679] systemd-fstab-generator[1260]: Ignoring "noauto" for root device
	[  +0.269328] kauditd_printk_skb: 29 callbacks suppressed
	[Dec12 20:18] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [0dfb53ca1162] <==
	{"level":"info","ts":"2023-12-12T20:16:46.212171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2023-12-12T20:16:46.212178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T20:16:46.212184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2023-12-12T20:16:46.21448Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:16:46.214911Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-675000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T20:16:46.215052Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:16:46.215333Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:16:46.215433Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:16:46.215068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:16:46.219011Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2023-12-12T20:16:46.215082Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:16:46.2198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T20:16:46.223336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T20:16:46.223372Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T20:16:48.80496Z","caller":"traceutil/trace.go:171","msg":"trace[1901849977] transaction","detail":"{read_only:false; response_revision:24; number_of_response:1; }","duration":"101.053217ms","start":"2023-12-12T20:16:48.703894Z","end":"2023-12-12T20:16:48.804947Z","steps":["trace[1901849977] 'process raft request'  (duration: 60.497972ms)","trace[1901849977] 'compare'  (duration: 40.496633ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T20:17:21.184189Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-12T20:17:21.184253Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-675000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	{"level":"warn","ts":"2023-12-12T20:17:21.184303Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T20:17:21.184397Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T20:17:21.194803Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T20:17:21.194848Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-12T20:17:21.194906Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e0290fa3161c5471","current-leader-member-id":"e0290fa3161c5471"}
	{"level":"info","ts":"2023-12-12T20:17:21.19608Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T20:17:21.196155Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T20:17:21.196163Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-675000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	
	
	==> etcd [13a33f6b8801] <==
	{"level":"info","ts":"2023-12-12T20:17:58.526963Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T20:17:58.526985Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T20:17:58.528868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2023-12-12T20:17:58.529024Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2023-12-12T20:17:58.529391Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:17:58.529464Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:17:58.531567Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T20:17:58.531655Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T20:17:58.531759Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T20:17:58.532106Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"e0290fa3161c5471","initial-advertise-peer-urls":["https://192.169.0.13:2380"],"listen-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T20:17:58.532188Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T20:18:00.38618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-12T20:18:00.386265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-12T20:18:00.386345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2023-12-12T20:18:00.386362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 3"}
	{"level":"info","ts":"2023-12-12T20:18:00.38637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2023-12-12T20:18:00.386382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 3"}
	{"level":"info","ts":"2023-12-12T20:18:00.386432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2023-12-12T20:18:00.388101Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:18:00.389449Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2023-12-12T20:18:00.388035Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-675000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T20:18:00.390226Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:18:00.3911Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T20:18:00.395073Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T20:18:00.395201Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:18:22 up 0 min,  0 users,  load average: 0.70, 0.19, 0.06
	Linux multinode-675000 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [9420be4a7a64] <==
	I1212 20:18:05.123504       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 20:18:05.123587       1 main.go:107] hostIP = 192.169.0.13
	podIP = 192.169.0.13
	I1212 20:18:05.123961       1 main.go:116] setting mtu 1500 for CNI 
	I1212 20:18:05.123996       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 20:18:05.124018       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 20:18:05.426096       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:18:05.426142       1 main.go:227] handling current node
	I1212 20:18:15.436313       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:18:15.436349       1 main.go:227] handling current node
	
	
	==> kindnet [a391a1302e24] <==
	I1212 20:17:10.309204       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 20:17:10.309282       1 main.go:107] hostIP = 192.169.0.13
	podIP = 192.169.0.13
	I1212 20:17:10.309416       1 main.go:116] setting mtu 1500 for CNI 
	I1212 20:17:10.309458       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 20:17:10.309478       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 20:17:10.609013       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:17:10.609049       1 main.go:227] handling current node
	I1212 20:17:20.613618       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:17:20.613636       1 main.go:227] handling current node
	
	
	==> kube-apiserver [6a5980fcc6dc] <==
	W1212 20:17:22.191719       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191728       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191748       1 logging.go:59] [core] [Channel #13 SubChannel #15] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191760       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191776       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191791       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191803       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191820       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191832       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191850       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191862       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191887       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191891       1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191919       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191921       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191949       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191972       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191979       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.192009       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.192027       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.192037       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.192010       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.192055       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191950       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.192079       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [fb02933e38d8] <==
	I1212 20:18:01.333189       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1212 20:18:01.363754       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1212 20:18:01.363910       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 20:18:01.420145       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:18:01.427472       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 20:18:01.428227       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 20:18:01.428280       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 20:18:01.429042       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 20:18:01.429872       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 20:18:01.430019       1 aggregator.go:166] initial CRD sync complete...
	I1212 20:18:01.430163       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 20:18:01.430189       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:18:01.430195       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:18:01.435066       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 20:18:01.442494       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 20:18:01.479079       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 20:18:02.335074       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1212 20:18:02.559015       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I1212 20:18:02.559823       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 20:18:02.565779       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:18:03.905205       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 20:18:03.996070       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 20:18:04.004028       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 20:18:04.042733       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:18:04.047213       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [10a8d5eab449] <==
	I1212 20:18:13.743624       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1212 20:18:13.743918       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1212 20:18:13.745184       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1212 20:18:13.746849       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1212 20:18:13.748631       1 shared_informer.go:318] Caches are synced for taint
	I1212 20:18:13.748895       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1212 20:18:13.749079       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-675000"
	I1212 20:18:13.749242       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1212 20:18:13.748911       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1212 20:18:13.749499       1 taint_manager.go:210] "Sending events to api server"
	I1212 20:18:13.749724       1 event.go:307] "Event occurred" object="multinode-675000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-675000 event: Registered Node multinode-675000 in Controller"
	I1212 20:18:13.806060       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1212 20:18:13.831230       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1212 20:18:13.848345       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1212 20:18:13.858199       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 20:18:13.861528       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1212 20:18:13.865749       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 20:18:13.913918       1 shared_informer.go:318] Caches are synced for cronjob
	I1212 20:18:13.932222       1 shared_informer.go:318] Caches are synced for job
	I1212 20:18:14.272655       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:18:14.272921       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 20:18:14.294370       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:18:18.952577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.857µs"
	I1212 20:18:18.978539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.676439ms"
	I1212 20:18:18.978756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="176.173µs"
	
	
	==> kube-controller-manager [2e3863acd67e] <==
	I1212 20:17:03.597170       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1212 20:17:03.603638       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4vq6m"
	I1212 20:17:03.603673       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q4dfx"
	I1212 20:17:03.632240       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1212 20:17:03.633170       1 shared_informer.go:318] Caches are synced for endpoint
	I1212 20:17:03.720712       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 20:17:03.784064       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 20:17:04.112473       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:17:04.180891       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:17:04.180925       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 20:17:04.387915       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1212 20:17:04.569116       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1212 20:17:04.628590       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-7ddxh"
	I1212 20:17:04.642387       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-2qgqq"
	I1212 20:17:04.666378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="278.873535ms"
	I1212 20:17:04.675935       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-7ddxh"
	I1212 20:17:04.686219       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.811252ms"
	I1212 20:17:04.690363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="4.119126ms"
	I1212 20:17:04.690483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.544µs"
	I1212 20:17:13.476941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.916µs"
	I1212 20:17:13.496255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.815µs"
	I1212 20:17:13.582289       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1212 20:17:15.437619       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.389µs"
	I1212 20:17:15.456455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.706842ms"
	I1212 20:17:15.456719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.35µs"
	
	
	==> kube-proxy [2c9cb416955c] <==
	I1212 20:18:02.663704       1 server_others.go:69] "Using iptables proxy"
	I1212 20:18:02.685147       1 node.go:141] Successfully retrieved node IP: 192.169.0.13
	I1212 20:18:02.741359       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 20:18:02.741431       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 20:18:02.743329       1 server_others.go:152] "Using iptables Proxier"
	I1212 20:18:02.744068       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 20:18:02.744393       1 server.go:846] "Version info" version="v1.28.4"
	I1212 20:18:02.744427       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:18:02.746456       1 config.go:188] "Starting service config controller"
	I1212 20:18:02.746796       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 20:18:02.746849       1 config.go:97] "Starting endpoint slice config controller"
	I1212 20:18:02.746854       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 20:18:02.749719       1 config.go:315] "Starting node config controller"
	I1212 20:18:02.749850       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 20:18:02.847975       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 20:18:02.848021       1 shared_informer.go:318] Caches are synced for service config
	I1212 20:18:02.850757       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [5c4ec41a543b] <==
	I1212 20:17:05.147381       1 server_others.go:69] "Using iptables proxy"
	I1212 20:17:05.156379       1 node.go:141] Successfully retrieved node IP: 192.169.0.13
	I1212 20:17:05.209428       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 20:17:05.209444       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 20:17:05.221145       1 server_others.go:152] "Using iptables Proxier"
	I1212 20:17:05.221199       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 20:17:05.221326       1 server.go:846] "Version info" version="v1.28.4"
	I1212 20:17:05.221358       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:17:05.222088       1 config.go:315] "Starting node config controller"
	I1212 20:17:05.222117       1 config.go:188] "Starting service config controller"
	I1212 20:17:05.222123       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 20:17:05.222134       1 config.go:97] "Starting endpoint slice config controller"
	I1212 20:17:05.222136       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 20:17:05.224472       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 20:17:05.325561       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 20:17:05.325623       1 shared_informer.go:318] Caches are synced for service config
	I1212 20:17:05.325803       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6e2edde92c79] <==
	I1212 20:17:58.495264       1 serving.go:348] Generated self-signed cert in-memory
	W1212 20:18:01.372969       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 20:18:01.373117       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:18:01.373212       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 20:18:01.373230       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 20:18:01.426207       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 20:18:01.426314       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:18:01.428502       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:18:01.429201       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 20:18:01.430440       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 20:18:01.430491       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 20:18:01.530029       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ec1ccfe051cf] <==
	E1212 20:16:48.670629       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 20:16:48.670761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 20:16:48.670797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 20:16:48.670806       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 20:16:48.670812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 20:16:48.670822       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 20:16:48.670827       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 20:16:48.671884       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 20:16:48.671978       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:16:49.523847       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 20:16:49.523874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 20:16:49.553927       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 20:16:49.553950       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 20:16:49.572215       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 20:16:49.572242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 20:16:49.611936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 20:16:49.612040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 20:16:49.643230       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 20:16:49.643284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 20:16:49.760509       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 20:16:49.760528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1212 20:16:49.954233       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 20:17:21.142227       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1212 20:17:21.142282       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1212 20:17:21.142390       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 20:17:37 UTC, ends at Tue 2023-12-12 20:18:24 UTC. --
	Dec 12 20:18:01 multinode-675000 kubelet[1266]: E1212 20:18:01.745754    1266 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-2qgqq" podUID="6bc47af7-f871-4daa-97ca-23500d80fc1b"
	Dec 12 20:18:01 multinode-675000 kubelet[1266]: I1212 20:18:01.750515    1266 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Dec 12 20:18:01 multinode-675000 kubelet[1266]: I1212 20:18:01.792943    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c528f3f9-a180-497c-892d-0305174740c9-lib-modules\") pod \"kindnet-4vq6m\" (UID: \"c528f3f9-a180-497c-892d-0305174740c9\") " pod="kube-system/kindnet-4vq6m"
	Dec 12 20:18:01 multinode-675000 kubelet[1266]: I1212 20:18:01.793066    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a62b5cc-b780-4ef5-8663-4a01ca0e2932-lib-modules\") pod \"kube-proxy-q4dfx\" (UID: \"2a62b5cc-b780-4ef5-8663-4a01ca0e2932\") " pod="kube-system/kube-proxy-q4dfx"
	Dec 12 20:18:01 multinode-675000 kubelet[1266]: I1212 20:18:01.793121    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c528f3f9-a180-497c-892d-0305174740c9-xtables-lock\") pod \"kindnet-4vq6m\" (UID: \"c528f3f9-a180-497c-892d-0305174740c9\") " pod="kube-system/kindnet-4vq6m"
	Dec 12 20:18:01 multinode-675000 kubelet[1266]: I1212 20:18:01.793163    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a62b5cc-b780-4ef5-8663-4a01ca0e2932-xtables-lock\") pod \"kube-proxy-q4dfx\" (UID: \"2a62b5cc-b780-4ef5-8663-4a01ca0e2932\") " pod="kube-system/kube-proxy-q4dfx"
	Dec 12 20:18:01 multinode-675000 kubelet[1266]: I1212 20:18:01.793204    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6f39d754-bc48-49e5-a0e4-fda2cbf521b7-tmp\") pod \"storage-provisioner\" (UID: \"6f39d754-bc48-49e5-a0e4-fda2cbf521b7\") " pod="kube-system/storage-provisioner"
	Dec 12 20:18:01 multinode-675000 kubelet[1266]: I1212 20:18:01.793269    1266 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c528f3f9-a180-497c-892d-0305174740c9-cni-cfg\") pod \"kindnet-4vq6m\" (UID: \"c528f3f9-a180-497c-892d-0305174740c9\") " pod="kube-system/kindnet-4vq6m"
	Dec 12 20:18:01 multinode-675000 kubelet[1266]: E1212 20:18:01.793823    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:18:01 multinode-675000 kubelet[1266]: E1212 20:18:01.794007    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume podName:6bc47af7-f871-4daa-97ca-23500d80fc1b nodeName:}" failed. No retries permitted until 2023-12-12 20:18:02.293956932 +0000 UTC m=+5.720183043 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume") pod "coredns-5dd5756b68-2qgqq" (UID: "6bc47af7-f871-4daa-97ca-23500d80fc1b") : object "kube-system"/"coredns" not registered
	Dec 12 20:18:02 multinode-675000 kubelet[1266]: E1212 20:18:02.296380    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:18:02 multinode-675000 kubelet[1266]: E1212 20:18:02.296423    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume podName:6bc47af7-f871-4daa-97ca-23500d80fc1b nodeName:}" failed. No retries permitted until 2023-12-12 20:18:03.29641335 +0000 UTC m=+6.722639465 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume") pod "coredns-5dd5756b68-2qgqq" (UID: "6bc47af7-f871-4daa-97ca-23500d80fc1b") : object "kube-system"/"coredns" not registered
	Dec 12 20:18:02 multinode-675000 kubelet[1266]: I1212 20:18:02.785003    1266 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="075926302b8a7db5b17e4029f40a3ea644500efa4bda04436ff86e1d0b6bd7c1"
	Dec 12 20:18:02 multinode-675000 kubelet[1266]: I1212 20:18:02.793611    1266 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f29a96ae4144d6c320eca42d930561bab97cd5b7de520b97fef4e69c5e514b"
	Dec 12 20:18:03 multinode-675000 kubelet[1266]: E1212 20:18:03.302883    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:18:03 multinode-675000 kubelet[1266]: E1212 20:18:03.302977    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume podName:6bc47af7-f871-4daa-97ca-23500d80fc1b nodeName:}" failed. No retries permitted until 2023-12-12 20:18:05.302963013 +0000 UTC m=+8.729189124 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume") pod "coredns-5dd5756b68-2qgqq" (UID: "6bc47af7-f871-4daa-97ca-23500d80fc1b") : object "kube-system"/"coredns" not registered
	Dec 12 20:18:04 multinode-675000 kubelet[1266]: E1212 20:18:04.748044    1266 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-2qgqq" podUID="6bc47af7-f871-4daa-97ca-23500d80fc1b"
	Dec 12 20:18:04 multinode-675000 kubelet[1266]: I1212 20:18:04.748877    1266 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc0b30a73c66dd1b745bdb2bcedf1caf4be4063ee094ccb10af19d2aaed40549"
	Dec 12 20:18:05 multinode-675000 kubelet[1266]: E1212 20:18:05.318331    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:18:05 multinode-675000 kubelet[1266]: E1212 20:18:05.318396    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume podName:6bc47af7-f871-4daa-97ca-23500d80fc1b nodeName:}" failed. No retries permitted until 2023-12-12 20:18:09.31838649 +0000 UTC m=+12.744612603 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume") pod "coredns-5dd5756b68-2qgqq" (UID: "6bc47af7-f871-4daa-97ca-23500d80fc1b") : object "kube-system"/"coredns" not registered
	Dec 12 20:18:06 multinode-675000 kubelet[1266]: E1212 20:18:06.805617    1266 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-2qgqq" podUID="6bc47af7-f871-4daa-97ca-23500d80fc1b"
	Dec 12 20:18:08 multinode-675000 kubelet[1266]: E1212 20:18:08.803857    1266 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-2qgqq" podUID="6bc47af7-f871-4daa-97ca-23500d80fc1b"
	Dec 12 20:18:09 multinode-675000 kubelet[1266]: E1212 20:18:09.346694    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:18:09 multinode-675000 kubelet[1266]: E1212 20:18:09.346735    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume podName:6bc47af7-f871-4daa-97ca-23500d80fc1b nodeName:}" failed. No retries permitted until 2023-12-12 20:18:17.346724802 +0000 UTC m=+20.772950916 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume") pod "coredns-5dd5756b68-2qgqq" (UID: "6bc47af7-f871-4daa-97ca-23500d80fc1b") : object "kube-system"/"coredns" not registered
	Dec 12 20:18:09 multinode-675000 kubelet[1266]: I1212 20:18:09.814740    1266 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	
	
	==> storage-provisioner [0b9a6a315bae] <==
	I1212 20:17:14.312616       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:17:14.322893       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:17:14.322983       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 20:17:14.329967       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:17:14.330645       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e6fbc79-a02a-4f5f-82d7-de5fe00a9d7b", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-675000_2f2ccc3c-bbc3-49dd-b895-ab7f450e9251 became leader
	I1212 20:17:14.330724       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-675000_2f2ccc3c-bbc3-49dd-b895-ab7f450e9251!
	I1212 20:17:14.432075       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-675000_2f2ccc3c-bbc3-49dd-b895-ab7f450e9251!
	
	
	==> storage-provisioner [d9e94810ceb6] <==
	I1212 20:18:02.952354       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-675000 -n multinode-675000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-675000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (55.49s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (83.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-675000
multinode_test.go:480: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-675000-m01 --driver=hyperkit 
multinode_test.go:480: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-675000-m01 --driver=hyperkit : (37.91671169s)
multinode_test.go:482: expected start profile command to fail. args "out/minikube-darwin-amd64 start -p multinode-675000-m01 --driver=hyperkit "
multinode_test.go:488: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-675000-m02 --driver=hyperkit 
E1212 12:19:17.102620    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-675000-m02 --driver=hyperkit : (36.283362997s)
multinode_test.go:495: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-675000
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-675000: exit status 80 (270.149459ms)

                                                
                                                
-- stdout --
	* Adding node m02 to cluster multinode-675000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-675000-m02 already exists in multinode-675000-m02 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-675000-m02
multinode_test.go:500: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-675000-m02: (5.315405717s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-675000 -n multinode-675000
E1212 12:19:44.794217    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
helpers_test.go:244: <<< TestMultiNode/serial/ValidateNameConflict FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/ValidateNameConflict]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-675000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-675000 logs -n 25: (3.240257184s)
helpers_test.go:252: TestMultiNode/serial/ValidateNameConflict logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:14 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:15 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:15 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | jsonpath='{.items[*].status.podIP}'  |                      |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                      |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- exec          | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | -- nslookup kubernetes.io            |                      |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- exec          | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | -- nslookup kubernetes.default       |                      |         |         |                     |                     |
	| kubectl | -p multinode-675000                  | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | -- exec  -- nslookup                 |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                      |         |         |                     |                     |
	| kubectl | -p multinode-675000 -- get pods -o   | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | jsonpath='{.items[*].metadata.name}' |                      |         |         |                     |                     |
	| node    | add -p multinode-675000 -v 3         | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	| node    | multinode-675000 node stop m03       | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	| node    | multinode-675000 node start          | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	|         | m03 --alsologtostderr                |                      |         |         |                     |                     |
	| node    | list -p multinode-675000             | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:16 PST |                     |
	| stop    | -p multinode-675000                  | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:16 PST | 12 Dec 23 12:16 PST |
	| start   | -p multinode-675000                  | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:16 PST | 12 Dec 23 12:17 PST |
	|         | --wait=true -v=8                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	| node    | list -p multinode-675000             | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:17 PST |                     |
	| node    | multinode-675000 node delete         | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:17 PST |                     |
	|         | m03                                  |                      |         |         |                     |                     |
	| stop    | multinode-675000 stop                | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:17 PST | 12 Dec 23 12:17 PST |
	| start   | -p multinode-675000                  | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:17 PST | 12 Dec 23 12:18 PST |
	|         | --wait=true -v=8                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| node    | list -p multinode-675000             | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:18 PST |                     |
	| start   | -p multinode-675000-m01              | multinode-675000-m01 | jenkins | v1.32.0 | 12 Dec 23 12:18 PST | 12 Dec 23 12:19 PST |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| start   | -p multinode-675000-m02              | multinode-675000-m02 | jenkins | v1.32.0 | 12 Dec 23 12:19 PST | 12 Dec 23 12:19 PST |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| node    | add -p multinode-675000              | multinode-675000     | jenkins | v1.32.0 | 12 Dec 23 12:19 PST |                     |
	| delete  | -p multinode-675000-m02              | multinode-675000-m02 | jenkins | v1.32.0 | 12 Dec 23 12:19 PST | 12 Dec 23 12:19 PST |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 12:19:02
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 12:19:02.805372    6715 out.go:296] Setting OutFile to fd 1 ...
	I1212 12:19:02.805683    6715 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:19:02.805685    6715 out.go:309] Setting ErrFile to fd 2...
	I1212 12:19:02.805688    6715 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:19:02.805880    6715 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	I1212 12:19:02.807548    6715 out.go:303] Setting JSON to false
	I1212 12:19:02.830423    6715 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2913,"bootTime":1702409429,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 12:19:02.830514    6715 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 12:19:02.852633    6715 out.go:177] * [multinode-675000-m02] minikube v1.32.0 on Darwin 14.2
	I1212 12:19:02.894364    6715 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 12:19:02.894448    6715 notify.go:220] Checking for updates...
	I1212 12:19:02.935969    6715 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:19:02.978171    6715 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 12:19:03.019122    6715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 12:19:03.061024    6715 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	I1212 12:19:03.082338    6715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 12:19:03.103783    6715 config.go:182] Loaded profile config "multinode-675000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:19:03.103895    6715 config.go:182] Loaded profile config "multinode-675000-m01": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:19:03.104004    6715 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 12:19:03.133052    6715 out.go:177] * Using the hyperkit driver based on user configuration
	I1212 12:19:03.175212    6715 start.go:298] selected driver: hyperkit
	I1212 12:19:03.175221    6715 start.go:902] validating driver "hyperkit" against <nil>
	I1212 12:19:03.175230    6715 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 12:19:03.175364    6715 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 12:19:03.175471    6715 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17734-1975/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 12:19:03.183630    6715 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 12:19:03.187903    6715 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:19:03.187926    6715 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 12:19:03.187962    6715 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 12:19:03.190721    6715 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1212 12:19:03.190865    6715 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 12:19:03.190910    6715 cni.go:84] Creating CNI manager for ""
	I1212 12:19:03.190923    6715 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 12:19:03.190933    6715 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 12:19:03.190942    6715 start_flags.go:323] config:
	{Name:multinode-675000-m02 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-675000-m02 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 12:19:03.191081    6715 iso.go:125] acquiring lock: {Name:mkd640d41cda61c79a7d2c2e38355d745b556a2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 12:19:03.213752    6715 out.go:177] * Starting control plane node multinode-675000-m02 in cluster multinode-675000-m02
	I1212 12:19:03.237127    6715 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 12:19:03.237158    6715 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 12:19:03.237173    6715 cache.go:56] Caching tarball of preloaded images
	I1212 12:19:03.237266    6715 preload.go:174] Found /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 12:19:03.237272    6715 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 12:19:03.237362    6715 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/config.json ...
	I1212 12:19:03.237379    6715 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/config.json: {Name:mkfc3fd97e54b649b9d5248f37c4fff4b98a7ef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:19:03.237728    6715 start.go:365] acquiring machines lock for multinode-675000-m02: {Name:mkcfb9a2794178bbcff953e64f7f6a3e3b1e9997 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 12:19:03.237780    6715 start.go:369] acquired machines lock for "multinode-675000-m02" in 42.623µs
	I1212 12:19:03.237805    6715 start.go:93] Provisioning new machine with config: &{Name:multinode-675000-m02 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-675000-m02 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 12:19:03.237852    6715 start.go:125] createHost starting for "" (driver="hyperkit")
	I1212 12:19:03.259146    6715 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I1212 12:19:03.259421    6715 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:19:03.259471    6715 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:19:03.267558    6715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51529
	I1212 12:19:03.267903    6715 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:19:03.268329    6715 main.go:141] libmachine: Using API Version  1
	I1212 12:19:03.268336    6715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:19:03.268582    6715 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:19:03.268723    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetMachineName
	I1212 12:19:03.268829    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .DriverName
	I1212 12:19:03.268927    6715 start.go:159] libmachine.API.Create for "multinode-675000-m02" (driver="hyperkit")
	I1212 12:19:03.268948    6715 client.go:168] LocalClient.Create starting
	I1212 12:19:03.268981    6715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem
	I1212 12:19:03.269025    6715 main.go:141] libmachine: Decoding PEM data...
	I1212 12:19:03.269037    6715 main.go:141] libmachine: Parsing certificate...
	I1212 12:19:03.269096    6715 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem
	I1212 12:19:03.269121    6715 main.go:141] libmachine: Decoding PEM data...
	I1212 12:19:03.269132    6715 main.go:141] libmachine: Parsing certificate...
	I1212 12:19:03.269143    6715 main.go:141] libmachine: Running pre-create checks...
	I1212 12:19:03.269151    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .PreCreateCheck
	I1212 12:19:03.269228    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:19:03.269448    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetConfigRaw
	I1212 12:19:03.280423    6715 main.go:141] libmachine: Creating machine...
	I1212 12:19:03.280454    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .Create
	I1212 12:19:03.280656    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:19:03.280828    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | I1212 12:19:03.280642    6723 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/17734-1975/.minikube
	I1212 12:19:03.280890    6715 main.go:141] libmachine: (multinode-675000-m02) Downloading /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17734-1975/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 12:19:03.505396    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | I1212 12:19:03.505327    6723 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/id_rsa...
	I1212 12:19:03.572995    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | I1212 12:19:03.572944    6723 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/multinode-675000-m02.rawdisk...
	I1212 12:19:03.573004    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Writing magic tar header
	I1212 12:19:03.573066    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Writing SSH key tar header
	I1212 12:19:03.573717    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | I1212 12:19:03.573686    6723 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02 ...
	I1212 12:19:04.008806    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:19:04.008823    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/hyperkit.pid
	I1212 12:19:04.008840    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Using UUID b1512122-992b-11ee-a00c-f01898ef957c
	I1212 12:19:04.036199    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Generated MAC ae:69:0:1f:74:78
	I1212 12:19:04.036215    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-675000-m02
	I1212 12:19:04.036255    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b1512122-992b-11ee-a00c-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00009f1d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1212 12:19:04.036287    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b1512122-992b-11ee-a00c-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00009f1d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1212 12:19:04.036334    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/hyperkit.pid", "-c", "2", "-m", "6000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b1512122-992b-11ee-a00c-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/multinode-675000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/tty,log=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/bzimage,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-675000-m02"}
	I1212 12:19:04.036372    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/hyperkit.pid -c 2 -m 6000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b1512122-992b-11ee-a00c-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/multinode-675000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/tty,log=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/bzimage,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-675000-m02"
	I1212 12:19:04.036381    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1212 12:19:04.039463    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 DEBUG: hyperkit: Pid is 6724
	I1212 12:19:04.040467    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Attempt 0
	I1212 12:19:04.040482    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:19:04.040604    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | hyperkit pid from json: 6724
	I1212 12:19:04.041742    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Searching for ae:69:0:1f:74:78 in /var/db/dhcpd_leases ...
	I1212 12:19:04.041792    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I1212 12:19:04.041802    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:b0:b0:c6:4d:99 ID:1,9a:b0:b0:c6:4d:99 Lease:0x657a119b}
	I1212 12:19:04.041826    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:6:ed:17:4f:83:b2 ID:1,6:ed:17:4f:83:b2 Lease:0x657a1162}
	I1212 12:19:04.041832    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:61:fd:7b:ff:ad ID:1,d6:61:fd:7b:ff:ad Lease:0x6578bf04}
	I1212 12:19:04.041838    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:7e:4f:44:39:56:54 ID:1,7e:4f:44:39:56:54 Lease:0x6578bed7}
	I1212 12:19:04.041847    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:66:e9:56:a3:ac:b3 ID:1,66:e9:56:a3:ac:b3 Lease:0x6578beae}
	I1212 12:19:04.041853    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:44:78:68:b1:3a ID:1,3e:44:78:68:b1:3a Lease:0x657a0fe7}
	I1212 12:19:04.041860    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:f3:3:99:96:a ID:1,1e:f3:3:99:96:a Lease:0x657a0fab}
	I1212 12:19:04.041866    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:a6:c:94:a4:bb:23 ID:1,a6:c:94:a4:bb:23 Lease:0x657a0f0c}
	I1212 12:19:04.041875    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:52:61:6b:49:5:19 ID:1,52:61:6b:49:5:19 Lease:0x6578bd76}
	I1212 12:19:04.041883    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:4:40:f1:6c:89 ID:1,aa:4:40:f1:6c:89 Lease:0x657a0e0c}
	I1212 12:19:04.041889    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:32:8b:81:0:e0:a2 ID:1,32:8b:81:0:e0:a2 Lease:0x6578bc81}
	I1212 12:19:04.041894    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:e6:d:d2:81:4b ID:1,aa:e6:d:d2:81:4b Lease:0x657a0cd4}
	I1212 12:19:04.041899    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a0c44}
	I1212 12:19:04.046872    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1212 12:19:04.057153    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1212 12:19:04.058190    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 12:19:04.058218    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 12:19:04.058231    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 12:19:04.058244    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 12:19:04.643460    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1212 12:19:04.643472    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1212 12:19:04.748547    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 12:19:04.748562    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 12:19:04.748570    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 12:19:04.748575    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 12:19:04.749422    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1212 12:19:04.749428    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1212 12:19:06.043780    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Attempt 1
	I1212 12:19:06.043790    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:19:06.043900    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | hyperkit pid from json: 6724
	I1212 12:19:06.044827    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Searching for ae:69:0:1f:74:78 in /var/db/dhcpd_leases ...
	I1212 12:19:06.044876    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I1212 12:19:06.044886    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:b0:b0:c6:4d:99 ID:1,9a:b0:b0:c6:4d:99 Lease:0x657a119b}
	I1212 12:19:06.044895    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:6:ed:17:4f:83:b2 ID:1,6:ed:17:4f:83:b2 Lease:0x657a1162}
	I1212 12:19:06.044902    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:61:fd:7b:ff:ad ID:1,d6:61:fd:7b:ff:ad Lease:0x6578bf04}
	I1212 12:19:06.044908    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:7e:4f:44:39:56:54 ID:1,7e:4f:44:39:56:54 Lease:0x6578bed7}
	I1212 12:19:06.044914    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:66:e9:56:a3:ac:b3 ID:1,66:e9:56:a3:ac:b3 Lease:0x6578beae}
	I1212 12:19:06.044919    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:44:78:68:b1:3a ID:1,3e:44:78:68:b1:3a Lease:0x657a0fe7}
	I1212 12:19:06.044926    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:f3:3:99:96:a ID:1,1e:f3:3:99:96:a Lease:0x657a0fab}
	I1212 12:19:06.044934    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:a6:c:94:a4:bb:23 ID:1,a6:c:94:a4:bb:23 Lease:0x657a0f0c}
	I1212 12:19:06.044939    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:52:61:6b:49:5:19 ID:1,52:61:6b:49:5:19 Lease:0x6578bd76}
	I1212 12:19:06.044954    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:4:40:f1:6c:89 ID:1,aa:4:40:f1:6c:89 Lease:0x657a0e0c}
	I1212 12:19:06.044975    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:32:8b:81:0:e0:a2 ID:1,32:8b:81:0:e0:a2 Lease:0x6578bc81}
	I1212 12:19:06.044982    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:e6:d:d2:81:4b ID:1,aa:e6:d:d2:81:4b Lease:0x657a0cd4}
	I1212 12:19:06.044989    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a0c44}
	I1212 12:19:08.046698    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Attempt 2
	I1212 12:19:08.046713    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:19:08.046755    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | hyperkit pid from json: 6724
	I1212 12:19:08.047630    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Searching for ae:69:0:1f:74:78 in /var/db/dhcpd_leases ...
	I1212 12:19:08.047668    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I1212 12:19:08.047674    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:b0:b0:c6:4d:99 ID:1,9a:b0:b0:c6:4d:99 Lease:0x657a119b}
	I1212 12:19:08.047683    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:6:ed:17:4f:83:b2 ID:1,6:ed:17:4f:83:b2 Lease:0x657a1162}
	I1212 12:19:08.047694    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:61:fd:7b:ff:ad ID:1,d6:61:fd:7b:ff:ad Lease:0x6578bf04}
	I1212 12:19:08.047701    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:7e:4f:44:39:56:54 ID:1,7e:4f:44:39:56:54 Lease:0x6578bed7}
	I1212 12:19:08.047706    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:66:e9:56:a3:ac:b3 ID:1,66:e9:56:a3:ac:b3 Lease:0x6578beae}
	I1212 12:19:08.047713    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:44:78:68:b1:3a ID:1,3e:44:78:68:b1:3a Lease:0x657a0fe7}
	I1212 12:19:08.047718    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:f3:3:99:96:a ID:1,1e:f3:3:99:96:a Lease:0x657a0fab}
	I1212 12:19:08.047737    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:a6:c:94:a4:bb:23 ID:1,a6:c:94:a4:bb:23 Lease:0x657a0f0c}
	I1212 12:19:08.047744    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:52:61:6b:49:5:19 ID:1,52:61:6b:49:5:19 Lease:0x6578bd76}
	I1212 12:19:08.047750    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:4:40:f1:6c:89 ID:1,aa:4:40:f1:6c:89 Lease:0x657a0e0c}
	I1212 12:19:08.047755    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:32:8b:81:0:e0:a2 ID:1,32:8b:81:0:e0:a2 Lease:0x6578bc81}
	I1212 12:19:08.047767    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:e6:d:d2:81:4b ID:1,aa:e6:d:d2:81:4b Lease:0x657a0cd4}
	I1212 12:19:08.047781    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a0c44}
	I1212 12:19:09.814716    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1212 12:19:09.814739    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1212 12:19:09.814752    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | 2023/12/12 12:19:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1212 12:19:10.048233    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Attempt 3
	I1212 12:19:10.048244    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:19:10.048302    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | hyperkit pid from json: 6724
	I1212 12:19:10.049158    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Searching for ae:69:0:1f:74:78 in /var/db/dhcpd_leases ...
	I1212 12:19:10.049202    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I1212 12:19:10.049209    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:b0:b0:c6:4d:99 ID:1,9a:b0:b0:c6:4d:99 Lease:0x657a119b}
	I1212 12:19:10.049225    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:6:ed:17:4f:83:b2 ID:1,6:ed:17:4f:83:b2 Lease:0x657a1162}
	I1212 12:19:10.049232    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:61:fd:7b:ff:ad ID:1,d6:61:fd:7b:ff:ad Lease:0x6578bf04}
	I1212 12:19:10.049238    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:7e:4f:44:39:56:54 ID:1,7e:4f:44:39:56:54 Lease:0x6578bed7}
	I1212 12:19:10.049245    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:66:e9:56:a3:ac:b3 ID:1,66:e9:56:a3:ac:b3 Lease:0x6578beae}
	I1212 12:19:10.049251    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:44:78:68:b1:3a ID:1,3e:44:78:68:b1:3a Lease:0x657a0fe7}
	I1212 12:19:10.049256    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:f3:3:99:96:a ID:1,1e:f3:3:99:96:a Lease:0x657a0fab}
	I1212 12:19:10.049281    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:a6:c:94:a4:bb:23 ID:1,a6:c:94:a4:bb:23 Lease:0x657a0f0c}
	I1212 12:19:10.049297    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:52:61:6b:49:5:19 ID:1,52:61:6b:49:5:19 Lease:0x6578bd76}
	I1212 12:19:10.049308    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:4:40:f1:6c:89 ID:1,aa:4:40:f1:6c:89 Lease:0x657a0e0c}
	I1212 12:19:10.049319    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:32:8b:81:0:e0:a2 ID:1,32:8b:81:0:e0:a2 Lease:0x6578bc81}
	I1212 12:19:10.049325    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:e6:d:d2:81:4b ID:1,aa:e6:d:d2:81:4b Lease:0x657a0cd4}
	I1212 12:19:10.049332    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a0c44}
	I1212 12:19:12.050442    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Attempt 4
	I1212 12:19:12.050479    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:19:12.050497    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | hyperkit pid from json: 6724
	I1212 12:19:12.051465    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Searching for ae:69:0:1f:74:78 in /var/db/dhcpd_leases ...
	I1212 12:19:12.051512    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I1212 12:19:12.051521    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:b0:b0:c6:4d:99 ID:1,9a:b0:b0:c6:4d:99 Lease:0x657a119b}
	I1212 12:19:12.051530    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:6:ed:17:4f:83:b2 ID:1,6:ed:17:4f:83:b2 Lease:0x657a1162}
	I1212 12:19:12.051539    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:61:fd:7b:ff:ad ID:1,d6:61:fd:7b:ff:ad Lease:0x6578bf04}
	I1212 12:19:12.051551    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:7e:4f:44:39:56:54 ID:1,7e:4f:44:39:56:54 Lease:0x6578bed7}
	I1212 12:19:12.051556    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:66:e9:56:a3:ac:b3 ID:1,66:e9:56:a3:ac:b3 Lease:0x6578beae}
	I1212 12:19:12.051562    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:44:78:68:b1:3a ID:1,3e:44:78:68:b1:3a Lease:0x657a0fe7}
	I1212 12:19:12.051570    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:f3:3:99:96:a ID:1,1e:f3:3:99:96:a Lease:0x657a0fab}
	I1212 12:19:12.051580    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:a6:c:94:a4:bb:23 ID:1,a6:c:94:a4:bb:23 Lease:0x657a0f0c}
	I1212 12:19:12.051588    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:52:61:6b:49:5:19 ID:1,52:61:6b:49:5:19 Lease:0x6578bd76}
	I1212 12:19:12.051595    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:4:40:f1:6c:89 ID:1,aa:4:40:f1:6c:89 Lease:0x657a0e0c}
	I1212 12:19:12.051602    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:32:8b:81:0:e0:a2 ID:1,32:8b:81:0:e0:a2 Lease:0x6578bc81}
	I1212 12:19:12.051610    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:e6:d:d2:81:4b ID:1,aa:e6:d:d2:81:4b Lease:0x657a0cd4}
	I1212 12:19:12.051617    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a0c44}
	I1212 12:19:14.053124    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Attempt 5
	I1212 12:19:14.053134    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:19:14.053240    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | hyperkit pid from json: 6724
	I1212 12:19:14.054282    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Searching for ae:69:0:1f:74:78 in /var/db/dhcpd_leases ...
	I1212 12:19:14.054379    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I1212 12:19:14.054393    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ae:69:0:1f:74:78 ID:1,ae:69:0:1f:74:78 Lease:0x657a11c1}
	I1212 12:19:14.054403    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Found match: ae:69:0:1f:74:78
	I1212 12:19:14.054407    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | IP: 192.169.0.15
	I1212 12:19:14.054472    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetConfigRaw
	I1212 12:19:14.055135    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .DriverName
	I1212 12:19:14.055305    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .DriverName
	I1212 12:19:14.055408    6715 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 12:19:14.055414    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetState
	I1212 12:19:14.055525    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:19:14.055585    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | hyperkit pid from json: 6724
	I1212 12:19:14.056622    6715 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 12:19:14.056633    6715 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 12:19:14.056637    6715 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 12:19:14.056643    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHHostname
	I1212 12:19:14.056759    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHPort
	I1212 12:19:14.056859    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:14.056958    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:14.057098    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHUsername
	I1212 12:19:14.057234    6715 main.go:141] libmachine: Using SSH client type: native
	I1212 12:19:14.057596    6715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 12:19:14.057600    6715 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 12:19:15.124764    6715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 12:19:15.124772    6715 main.go:141] libmachine: Detecting the provisioner...
	I1212 12:19:15.124777    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHHostname
	I1212 12:19:15.124905    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHPort
	I1212 12:19:15.124997    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:15.125078    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:15.125179    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHUsername
	I1212 12:19:15.125308    6715 main.go:141] libmachine: Using SSH client type: native
	I1212 12:19:15.125559    6715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 12:19:15.125564    6715 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 12:19:15.192179    6715 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 12:19:15.192235    6715 main.go:141] libmachine: found compatible host: buildroot
	I1212 12:19:15.192240    6715 main.go:141] libmachine: Provisioning with buildroot...
	I1212 12:19:15.192244    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetMachineName
	I1212 12:19:15.192380    6715 buildroot.go:166] provisioning hostname "multinode-675000-m02"
	I1212 12:19:15.192388    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetMachineName
	I1212 12:19:15.192480    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHHostname
	I1212 12:19:15.192563    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHPort
	I1212 12:19:15.192643    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:15.192727    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:15.192818    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHUsername
	I1212 12:19:15.192943    6715 main.go:141] libmachine: Using SSH client type: native
	I1212 12:19:15.193182    6715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 12:19:15.193188    6715 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-675000-m02 && echo "multinode-675000-m02" | sudo tee /etc/hostname
	I1212 12:19:15.268074    6715 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-675000-m02
	
	I1212 12:19:15.268088    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHHostname
	I1212 12:19:15.268227    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHPort
	I1212 12:19:15.268343    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:15.268427    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:15.268521    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHUsername
	I1212 12:19:15.268650    6715 main.go:141] libmachine: Using SSH client type: native
	I1212 12:19:15.268898    6715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 12:19:15.268907    6715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-675000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-675000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-675000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 12:19:15.342038    6715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 12:19:15.342052    6715 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17734-1975/.minikube CaCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17734-1975/.minikube}
	I1212 12:19:15.342064    6715 buildroot.go:174] setting up certificates
	I1212 12:19:15.342074    6715 provision.go:83] configureAuth start
	I1212 12:19:15.342080    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetMachineName
	I1212 12:19:15.342223    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetIP
	I1212 12:19:15.342359    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHHostname
	I1212 12:19:15.342449    6715 provision.go:138] copyHostCerts
	I1212 12:19:15.342521    6715 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem, removing ...
	I1212 12:19:15.342530    6715 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem
	I1212 12:19:15.342627    6715 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem (1675 bytes)
	I1212 12:19:15.342834    6715 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem, removing ...
	I1212 12:19:15.342837    6715 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem
	I1212 12:19:15.342903    6715 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem (1078 bytes)
	I1212 12:19:15.343061    6715 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem, removing ...
	I1212 12:19:15.343064    6715 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem
	I1212 12:19:15.343123    6715 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem (1123 bytes)
	I1212 12:19:15.343266    6715 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca-key.pem org=jenkins.multinode-675000-m02 san=[192.169.0.15 192.169.0.15 localhost 127.0.0.1 minikube multinode-675000-m02]
	I1212 12:19:15.393511    6715 provision.go:172] copyRemoteCerts
	I1212 12:19:15.393570    6715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 12:19:15.393589    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHHostname
	I1212 12:19:15.393738    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHPort
	I1212 12:19:15.393836    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:15.393936    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHUsername
	I1212 12:19:15.394031    6715 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/id_rsa Username:docker}
	I1212 12:19:15.432714    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 12:19:15.449047    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1212 12:19:15.465129    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 12:19:15.481471    6715 provision.go:86] duration metric: configureAuth took 139.385962ms
	I1212 12:19:15.481481    6715 buildroot.go:189] setting minikube options for container-runtime
	I1212 12:19:15.481607    6715 config.go:182] Loaded profile config "multinode-675000-m02": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:19:15.481620    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .DriverName
	I1212 12:19:15.481775    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHHostname
	I1212 12:19:15.481942    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHPort
	I1212 12:19:15.482075    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:15.482192    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:15.482285    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHUsername
	I1212 12:19:15.482397    6715 main.go:141] libmachine: Using SSH client type: native
	I1212 12:19:15.482625    6715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 12:19:15.482629    6715 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 12:19:15.549756    6715 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 12:19:15.549764    6715 buildroot.go:70] root file system type: tmpfs
	I1212 12:19:15.549839    6715 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 12:19:15.549855    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHHostname
	I1212 12:19:15.549986    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHPort
	I1212 12:19:15.550081    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:15.550169    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:15.550266    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHUsername
	I1212 12:19:15.550385    6715 main.go:141] libmachine: Using SSH client type: native
	I1212 12:19:15.550644    6715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 12:19:15.550689    6715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 12:19:15.627095    6715 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 12:19:15.627111    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHHostname
	I1212 12:19:15.627308    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHPort
	I1212 12:19:15.627399    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:15.627490    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:15.627573    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHUsername
	I1212 12:19:15.627715    6715 main.go:141] libmachine: Using SSH client type: native
	I1212 12:19:15.627963    6715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 12:19:15.627973    6715 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 12:19:16.133426    6715 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
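
The unit swap above relies on a compare-or-replace one-liner: diff exits non-zero when docker.service.new differs from the installed unit (or, as in this run, when no docker.service exists yet), and that failure triggers the move, daemon-reload, enable and restart. Condensed from the command logged above:

    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }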
	
	I1212 12:19:16.133436    6715 main.go:141] libmachine: Checking connection to Docker...
	I1212 12:19:16.133441    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetURL
	I1212 12:19:16.133585    6715 main.go:141] libmachine: Docker is up and running!
	I1212 12:19:16.133591    6715 main.go:141] libmachine: Reticulating splines...
	I1212 12:19:16.133598    6715 client.go:171] LocalClient.Create took 12.864832327s
	I1212 12:19:16.133608    6715 start.go:167] duration metric: libmachine.API.Create for "multinode-675000-m02" took 12.864871793s
	I1212 12:19:16.133620    6715 start.go:300] post-start starting for "multinode-675000-m02" (driver="hyperkit")
	I1212 12:19:16.133629    6715 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 12:19:16.133640    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .DriverName
	I1212 12:19:16.133792    6715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 12:19:16.133804    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHHostname
	I1212 12:19:16.133897    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHPort
	I1212 12:19:16.133967    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:16.134042    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHUsername
	I1212 12:19:16.134103    6715 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/id_rsa Username:docker}
	I1212 12:19:16.176143    6715 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 12:19:16.179444    6715 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 12:19:16.179463    6715 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17734-1975/.minikube/addons for local assets ...
	I1212 12:19:16.179564    6715 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17734-1975/.minikube/files for local assets ...
	I1212 12:19:16.179738    6715 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem -> 31982.pem in /etc/ssl/certs
	I1212 12:19:16.179892    6715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 12:19:16.187191    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem --> /etc/ssl/certs/31982.pem (1708 bytes)
	I1212 12:19:16.211643    6715 start.go:303] post-start completed in 78.016471ms
	I1212 12:19:16.211667    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetConfigRaw
	I1212 12:19:16.212317    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetIP
	I1212 12:19:16.212461    6715 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/config.json ...
	I1212 12:19:16.212783    6715 start.go:128] duration metric: createHost completed in 12.97511049s
	I1212 12:19:16.212796    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHHostname
	I1212 12:19:16.212896    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHPort
	I1212 12:19:16.212996    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:16.213086    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:16.213162    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHUsername
	I1212 12:19:16.213261    6715 main.go:141] libmachine: Using SSH client type: native
	I1212 12:19:16.213501    6715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I1212 12:19:16.213505    6715 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1212 12:19:16.285817    6715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702412355.617127320
	
	I1212 12:19:16.285824    6715 fix.go:206] guest clock: 1702412355.617127320
	I1212 12:19:16.285828    6715 fix.go:219] Guest: 2023-12-12 12:19:15.61712732 -0800 PST Remote: 2023-12-12 12:19:16.212789 -0800 PST m=+13.456291877 (delta=-595.66168ms)
	I1212 12:19:16.285848    6715 fix.go:190] guest clock delta is within tolerance: -595.66168ms
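
The clock check reads the guest's wall clock over SSH; the command is logged with Go's %!s(MISSING) placeholders, but what runs on the guest is simply a fractional Unix timestamp:

    date +%s.%N
    # -> 1702412355.617127320 in this run

The resulting delta against the host (about -596 ms here) is within minikube's tolerance, so the guest clock is left untouched.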
	I1212 12:19:16.285850    6715 start.go:83] releasing machines lock for "multinode-675000-m02", held for 13.048258893s
	I1212 12:19:16.285869    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .DriverName
	I1212 12:19:16.285998    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetIP
	I1212 12:19:16.286117    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .DriverName
	I1212 12:19:16.286418    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .DriverName
	I1212 12:19:16.286532    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .DriverName
	I1212 12:19:16.286618    6715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 12:19:16.286646    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHHostname
	I1212 12:19:16.286709    6715 ssh_runner.go:195] Run: cat /version.json
	I1212 12:19:16.286719    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHHostname
	I1212 12:19:16.286754    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHPort
	I1212 12:19:16.286859    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHPort
	I1212 12:19:16.286881    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:16.286991    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:16.287006    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHUsername
	I1212 12:19:16.287108    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHUsername
	I1212 12:19:16.287123    6715 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/id_rsa Username:docker}
	I1212 12:19:16.287204    6715 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/id_rsa Username:docker}
	I1212 12:19:16.322298    6715 ssh_runner.go:195] Run: systemctl --version
	I1212 12:19:16.327047    6715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 12:19:16.376962    6715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 12:19:16.377027    6715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 12:19:16.388129    6715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
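
Before minikube writes its own CNI config, any pre-existing bridge or podman configs are renamed out of the way with a .mk_disabled suffix (here that caught 87-podman-bridge.conflist). The find invocation above, re-quoted for an interactive shell:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;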
	I1212 12:19:16.388138    6715 start.go:475] detecting cgroup driver to use...
	I1212 12:19:16.388244    6715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 12:19:16.400334    6715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 12:19:16.406964    6715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 12:19:16.414016    6715 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 12:19:16.414062    6715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 12:19:16.421426    6715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 12:19:16.428626    6715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 12:19:16.436126    6715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 12:19:16.443841    6715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 12:19:16.451194    6715 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 12:19:16.458220    6715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 12:19:16.464738    6715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 12:19:16.471055    6715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:19:16.577918    6715 ssh_runner.go:195] Run: sudo systemctl restart containerd
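
The block above first points crictl at containerd and forces containerd onto the cgroupfs driver, matching the cgroupDriver used in the kubelet config generated later. The essential guest-side steps, condensed:

    # crictl: talk to containerd's socket
    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
    # containerd: disable the systemd cgroup driver, then reload and restart
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd

containerd is stopped again a few lines below, because this profile runs the docker runtime; crictl is then re-pointed at cri-dockerd.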
	I1212 12:19:16.592070    6715 start.go:475] detecting cgroup driver to use...
	I1212 12:19:16.592139    6715 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 12:19:16.605383    6715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 12:19:16.620094    6715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 12:19:16.634836    6715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 12:19:16.644917    6715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 12:19:16.654188    6715 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 12:19:16.678425    6715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 12:19:16.688803    6715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 12:19:16.701845    6715 ssh_runner.go:195] Run: which cri-dockerd
	I1212 12:19:16.704362    6715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 12:19:16.710150    6715 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 12:19:16.721504    6715 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 12:19:16.812786    6715 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 12:19:16.911308    6715 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 12:19:16.911378    6715 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 12:19:16.922808    6715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:19:17.013861    6715 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 12:19:18.266804    6715 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.252947416s)
	I1212 12:19:18.266867    6715 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 12:19:18.356579    6715 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 12:19:18.456620    6715 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 12:19:18.552133    6715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:19:18.636585    6715 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 12:19:18.651913    6715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:19:18.755288    6715 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1212 12:19:18.811075    6715 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1212 12:19:18.811976    6715 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1212 12:19:18.815969    6715 start.go:543] Will wait 60s for crictl version
	I1212 12:19:18.816021    6715 ssh_runner.go:195] Run: which crictl
	I1212 12:19:18.818825    6715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 12:19:18.853944    6715 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1212 12:19:18.854018    6715 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 12:19:18.872286    6715 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1212 12:19:18.913851    6715 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1212 12:19:18.913875    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetIP
	I1212 12:19:18.914078    6715 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1212 12:19:18.916717    6715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
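
host.minikube.internal is pinned in the guest's /etc/hosts with a strip-then-append pattern: filter out any stale entry, append the current mapping, and copy the temp file back over /etc/hosts (cp rather than mv, so the existing file is overwritten in place). The same pattern is reused further down for control-plane.minikube.internal. Condensed:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.169.0.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts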
	I1212 12:19:18.925647    6715 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 12:19:18.925704    6715 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 12:19:18.938479    6715 docker.go:671] Got preloaded images: 
	I1212 12:19:18.938486    6715 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1212 12:19:18.938549    6715 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 12:19:18.944925    6715 ssh_runner.go:195] Run: which lz4
	I1212 12:19:18.947440    6715 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1212 12:19:18.950012    6715 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 12:19:18.950027    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I1212 12:19:20.190792    6715 docker.go:635] Took 1.243410 seconds to copy over tarball
	I1212 12:19:20.190849    6715 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 12:19:24.217092    6715 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.026290045s)
	I1212 12:19:24.217105    6715 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 12:19:24.244288    6715 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1212 12:19:24.250568    6715 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1212 12:19:24.262067    6715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:19:24.344950    6715 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 12:19:26.218957    6715 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.874020417s)
	I1212 12:19:26.219040    6715 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1212 12:19:26.232891    6715 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1212 12:19:26.232905    6715 cache_images.go:84] Images are preloaded, skipping loading
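
Because none of the v1.28.4 images were present, the ~423 MB preload tarball was copied into the guest and unpacked directly into /var (which holds /var/lib/docker); a docker restart then makes the images visible. The guest-side steps, condensed from the commands above:

    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4    # requires lz4 on the guest
    sudo rm -f /preloaded.tar.lz4
    sudo systemctl restart docker
    docker images --format '{{.Repository}}:{{.Tag}}'  # should now list the kube-* images shown above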
	I1212 12:19:26.233011    6715 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1212 12:19:26.252585    6715 cni.go:84] Creating CNI manager for ""
	I1212 12:19:26.252595    6715 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 12:19:26.252605    6715 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1212 12:19:26.252620    6715 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.15 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-675000-m02 NodeName:multinode-675000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 12:19:26.252727    6715 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-675000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
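
This rendered config is staged on the node as /var/tmp/minikube/kubeadm.yaml.new, promoted to kubeadm.yaml, and consumed by kubeadm init, as the commands further down in this log show:

    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=...   # the full ignore list appears in the actual command below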
	
	I1212 12:19:26.252791    6715 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-675000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-675000-m02 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
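
The kubelet fragment above uses the same drop-in override trick as the docker unit earlier: the bare ExecStart= clears the command inherited from the base kubelet.service so the fully specified ExecStart that follows is the only one systemd sees. It is installed below as the 10-kubeadm.conf drop-in; once the files are in place, the merged unit can be inspected with:

    sudo systemctl daemon-reload   # required after adding or editing unit files and drop-ins
    sudo systemctl cat kubelet     # base unit plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf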
	I1212 12:19:26.252846    6715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1212 12:19:26.259028    6715 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 12:19:26.259078    6715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 12:19:26.264881    6715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I1212 12:19:26.276295    6715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 12:19:26.288374    6715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1212 12:19:26.300235    6715 ssh_runner.go:195] Run: grep 192.169.0.15	control-plane.minikube.internal$ /etc/hosts
	I1212 12:19:26.302759    6715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 12:19:26.311630    6715 certs.go:56] Setting up /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02 for IP: 192.169.0.15
	I1212 12:19:26.311645    6715 certs.go:190] acquiring lock for shared ca certs: {Name:mk3a28fc3e7d169ec96b49a3f31bfa6edcaf7ac0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:19:26.311788    6715 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.key
	I1212 12:19:26.311841    6715 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17734-1975/.minikube/proxy-client-ca.key
	I1212 12:19:26.311887    6715 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/client.key
	I1212 12:19:26.311897    6715 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/client.crt with IP's: []
	I1212 12:19:26.417025    6715 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/client.crt ...
	I1212 12:19:26.417033    6715 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/client.crt: {Name:mkff88a83fadd8db99b2a20051c5066fffbff0be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:19:26.417467    6715 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/client.key ...
	I1212 12:19:26.417474    6715 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/client.key: {Name:mk167eb08dc8d58c254f0ce7a657f182e42e911c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:19:26.417692    6715 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/apiserver.key.66702ba3
	I1212 12:19:26.417704    6715 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/apiserver.crt.66702ba3 with IP's: [192.169.0.15 10.96.0.1 127.0.0.1 10.0.0.1]
	I1212 12:19:26.519706    6715 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/apiserver.crt.66702ba3 ...
	I1212 12:19:26.519715    6715 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/apiserver.crt.66702ba3: {Name:mk46066241aee2d59b0eca6ef671dabcc8aa22e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:19:26.520023    6715 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/apiserver.key.66702ba3 ...
	I1212 12:19:26.520029    6715 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/apiserver.key.66702ba3: {Name:mk3e78d9318d6a2f83647c7d02fd82e412f17b26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:19:26.520220    6715 certs.go:337] copying /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/apiserver.crt.66702ba3 -> /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/apiserver.crt
	I1212 12:19:26.520430    6715 certs.go:341] copying /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/apiserver.key.66702ba3 -> /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/apiserver.key
	I1212 12:19:26.520607    6715 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/proxy-client.key
	I1212 12:19:26.520618    6715 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/proxy-client.crt with IP's: []
	I1212 12:19:26.581065    6715 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/proxy-client.crt ...
	I1212 12:19:26.581074    6715 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/proxy-client.crt: {Name:mk2d53dac59944301863e71ef17ebf5b02f36a63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:19:26.581419    6715 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/proxy-client.key ...
	I1212 12:19:26.581426    6715 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/proxy-client.key: {Name:mk55515fff3c2df4f903a266e6840f45df6d6f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:19:26.581848    6715 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/3198.pem (1338 bytes)
	W1212 12:19:26.581897    6715 certs.go:433] ignoring /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/3198_empty.pem, impossibly tiny 0 bytes
	I1212 12:19:26.581911    6715 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 12:19:26.581946    6715 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem (1078 bytes)
	I1212 12:19:26.581980    6715 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem (1123 bytes)
	I1212 12:19:26.582012    6715 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem (1675 bytes)
	I1212 12:19:26.582087    6715 certs.go:437] found cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem (1708 bytes)
	I1212 12:19:26.582674    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1212 12:19:26.600028    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 12:19:26.617146    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 12:19:26.634318    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m02/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 12:19:26.651552    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 12:19:26.668098    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 12:19:26.685139    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 12:19:26.702443    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 12:19:26.719776    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 12:19:26.736962    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/3198.pem --> /usr/share/ca-certificates/3198.pem (1338 bytes)
	I1212 12:19:26.753438    6715 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem --> /usr/share/ca-certificates/31982.pem (1708 bytes)
	I1212 12:19:26.770060    6715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 12:19:26.781725    6715 ssh_runner.go:195] Run: openssl version
	I1212 12:19:26.785550    6715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 12:19:26.792475    6715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 12:19:26.795502    6715 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:58 /usr/share/ca-certificates/minikubeCA.pem
	I1212 12:19:26.795536    6715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 12:19:26.799190    6715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 12:19:26.805747    6715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3198.pem && ln -fs /usr/share/ca-certificates/3198.pem /etc/ssl/certs/3198.pem"
	I1212 12:19:26.812427    6715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3198.pem
	I1212 12:19:26.815495    6715 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:03 /usr/share/ca-certificates/3198.pem
	I1212 12:19:26.815531    6715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3198.pem
	I1212 12:19:26.819251    6715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3198.pem /etc/ssl/certs/51391683.0"
	I1212 12:19:26.826086    6715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/31982.pem && ln -fs /usr/share/ca-certificates/31982.pem /etc/ssl/certs/31982.pem"
	I1212 12:19:26.833426    6715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/31982.pem
	I1212 12:19:26.836562    6715 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:03 /usr/share/ca-certificates/31982.pem
	I1212 12:19:26.836606    6715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/31982.pem
	I1212 12:19:26.840319    6715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/31982.pem /etc/ssl/certs/3ec20f2e.0"
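
Each CA is published twice: by name under /usr/share/ca-certificates, and as a <subject-hash>.0 symlink under /etc/ssl/certs so OpenSSL's hash-based lookup finds it during verification (b5213941, 51391683 and 3ec20f2e above are hashes computed by openssl). For a single certificate the pattern boils down to:

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"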
	I1212 12:19:26.847060    6715 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1212 12:19:26.849913    6715 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1212 12:19:26.849958    6715 kubeadm.go:404] StartCluster: {Name:multinode-675000-m02 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-675000-m02 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 12:19:26.850051    6715 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1212 12:19:26.862763    6715 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 12:19:26.868998    6715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 12:19:26.875510    6715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 12:19:26.882961    6715 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 12:19:26.882997    6715 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 12:19:26.923855    6715 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1212 12:19:26.923912    6715 kubeadm.go:322] [preflight] Running pre-flight checks
	I1212 12:19:27.028591    6715 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 12:19:27.028677    6715 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 12:19:27.028747    6715 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 12:19:27.254625    6715 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 12:19:27.298730    6715 out.go:204]   - Generating certificates and keys ...
	I1212 12:19:27.298817    6715 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1212 12:19:27.298902    6715 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1212 12:19:27.604006    6715 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 12:19:27.814609    6715 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1212 12:19:28.196177    6715 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1212 12:19:28.263326    6715 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1212 12:19:28.348972    6715 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1212 12:19:28.349168    6715 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-675000-m02] and IPs [192.169.0.15 127.0.0.1 ::1]
	I1212 12:19:28.593353    6715 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1212 12:19:28.593641    6715 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-675000-m02] and IPs [192.169.0.15 127.0.0.1 ::1]
	I1212 12:19:28.786828    6715 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 12:19:28.866009    6715 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 12:19:28.929151    6715 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1212 12:19:28.929443    6715 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 12:19:29.228643    6715 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 12:19:29.448651    6715 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 12:19:29.580278    6715 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 12:19:30.326894    6715 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 12:19:30.327353    6715 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 12:19:30.329112    6715 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 12:19:30.353089    6715 out.go:204]   - Booting up control plane ...
	I1212 12:19:30.353176    6715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 12:19:30.353248    6715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 12:19:30.353307    6715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 12:19:30.353384    6715 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 12:19:30.353464    6715 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 12:19:30.353496    6715 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1212 12:19:30.440669    6715 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 12:19:35.938958    6715 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502486 seconds
	I1212 12:19:35.939043    6715 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 12:19:35.951845    6715 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 12:19:36.491018    6715 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 12:19:36.491175    6715 kubeadm.go:322] [mark-control-plane] Marking the node multinode-675000-m02 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 12:19:37.007680    6715 kubeadm.go:322] [bootstrap-token] Using token: frq5tb.sdjnq2fb68k6g8i9
	I1212 12:19:37.047806    6715 out.go:204]   - Configuring RBAC rules ...
	I1212 12:19:37.047995    6715 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 12:19:37.051192    6715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 12:19:37.092416    6715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 12:19:37.094568    6715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 12:19:37.097008    6715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 12:19:37.099786    6715 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 12:19:37.109713    6715 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 12:19:37.286952    6715 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1212 12:19:37.454774    6715 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1212 12:19:37.455474    6715 kubeadm.go:322] 
	I1212 12:19:37.455525    6715 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1212 12:19:37.455528    6715 kubeadm.go:322] 
	I1212 12:19:37.455589    6715 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1212 12:19:37.455592    6715 kubeadm.go:322] 
	I1212 12:19:37.455610    6715 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1212 12:19:37.455654    6715 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 12:19:37.455699    6715 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 12:19:37.455705    6715 kubeadm.go:322] 
	I1212 12:19:37.455753    6715 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1212 12:19:37.455757    6715 kubeadm.go:322] 
	I1212 12:19:37.455798    6715 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 12:19:37.455806    6715 kubeadm.go:322] 
	I1212 12:19:37.455848    6715 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1212 12:19:37.455911    6715 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 12:19:37.455964    6715 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 12:19:37.455967    6715 kubeadm.go:322] 
	I1212 12:19:37.456032    6715 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 12:19:37.456092    6715 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1212 12:19:37.456098    6715 kubeadm.go:322] 
	I1212 12:19:37.456159    6715 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token frq5tb.sdjnq2fb68k6g8i9 \
	I1212 12:19:37.456238    6715 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f99f9657aff247a8042444d6497aa99debec968500b23dc54ae1da873e195109 \
	I1212 12:19:37.456256    6715 kubeadm.go:322] 	--control-plane 
	I1212 12:19:37.456258    6715 kubeadm.go:322] 
	I1212 12:19:37.456330    6715 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1212 12:19:37.456337    6715 kubeadm.go:322] 
	I1212 12:19:37.456396    6715 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token frq5tb.sdjnq2fb68k6g8i9 \
	I1212 12:19:37.456478    6715 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f99f9657aff247a8042444d6497aa99debec968500b23dc54ae1da873e195109 
	I1212 12:19:37.456727    6715 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 12:19:37.456739    6715 cni.go:84] Creating CNI manager for ""
	I1212 12:19:37.456756    6715 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 12:19:37.494381    6715 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 12:19:37.552819    6715 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 12:19:37.572048    6715 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
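
The 457-byte /etc/cni/net.d/1-k8s.conflist written here is not reproduced in the log. For orientation only, a minimal bridge conflist for the 10.244.0.0/16 pod CIDR configured above has roughly this shape (illustrative values, not minikube's exact template):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }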
	I1212 12:19:37.594442    6715 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 12:19:37.594538    6715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:19:37.594552    6715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1 minikube.k8s.io/name=multinode-675000-m02 minikube.k8s.io/updated_at=2023_12_12T12_19_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 12:19:37.623984    6715 ops.go:34] apiserver oom_adj: -16
	I1212 12:19:37.704526    6715 kubeadm.go:1088] duration metric: took 110.038486ms to wait for elevateKubeSystemPrivileges.
	I1212 12:19:37.704564    6715 kubeadm.go:406] StartCluster complete in 10.854771606s
	I1212 12:19:37.704578    6715 settings.go:142] acquiring lock: {Name:mk437dff6ee4f62ea2311e5ad7dccf890596936f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:19:37.704651    6715 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:19:37.705609    6715 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/kubeconfig: {Name:mk6d5ef4e0f8c6a055bbd7ff4a33097a831e2d15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:19:37.705862    6715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 12:19:37.705895    6715 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1212 12:19:37.705934    6715 addons.go:69] Setting storage-provisioner=true in profile "multinode-675000-m02"
	I1212 12:19:37.705936    6715 addons.go:69] Setting default-storageclass=true in profile "multinode-675000-m02"
	I1212 12:19:37.705948    6715 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-675000-m02"
	I1212 12:19:37.705956    6715 addons.go:231] Setting addon storage-provisioner=true in "multinode-675000-m02"
	I1212 12:19:37.705992    6715 host.go:66] Checking if "multinode-675000-m02" exists ...
	I1212 12:19:37.706051    6715 config.go:182] Loaded profile config "multinode-675000-m02": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:19:37.706223    6715 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:19:37.706240    6715 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:19:37.706243    6715 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:19:37.706254    6715 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:19:37.714811    6715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51553
	I1212 12:19:37.715186    6715 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:19:37.715459    6715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51555
	I1212 12:19:37.715509    6715 main.go:141] libmachine: Using API Version  1
	I1212 12:19:37.715515    6715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:19:37.715718    6715 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:19:37.715775    6715 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:19:37.716085    6715 main.go:141] libmachine: Using API Version  1
	I1212 12:19:37.716093    6715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:19:37.716142    6715 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:19:37.716158    6715 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:19:37.716301    6715 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:19:37.716898    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetState
	I1212 12:19:37.717172    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:19:37.717343    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | hyperkit pid from json: 6724
	I1212 12:19:37.719293    6715 addons.go:231] Setting addon default-storageclass=true in "multinode-675000-m02"
	I1212 12:19:37.719313    6715 host.go:66] Checking if "multinode-675000-m02" exists ...
	I1212 12:19:37.719565    6715 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:19:37.719587    6715 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:19:37.725616    6715 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-675000-m02" context rescaled to 1 replicas
	I1212 12:19:37.725656    6715 start.go:223] Will wait 6m0s for node &{Name: IP:192.169.0.15 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 12:19:37.748299    6715 out.go:177] * Verifying Kubernetes components...
	I1212 12:19:37.725690    6715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51557
	I1212 12:19:37.728761    6715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51558
	I1212 12:19:37.748707    6715 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:19:37.768861    6715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 12:19:37.769235    6715 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:19:37.769286    6715 main.go:141] libmachine: Using API Version  1
	I1212 12:19:37.769301    6715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:19:37.769627    6715 main.go:141] libmachine: Using API Version  1
	I1212 12:19:37.769635    6715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:19:37.769653    6715 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:19:37.769793    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetState
	I1212 12:19:37.769872    6715 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:19:37.769903    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:19:37.769975    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | hyperkit pid from json: 6724
	I1212 12:19:37.770245    6715 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:19:37.770267    6715 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:19:37.772073    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .DriverName
	I1212 12:19:37.792884    6715 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 12:19:37.779634    6715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51561
	I1212 12:19:37.793695    6715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
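
The sed pipeline above edits the coredns ConfigMap in place: it inserts a log directive ahead of the errors line and a hosts block ahead of `forward . /etc/resolv.conf`, then feeds the result back through kubectl replace. The inserted hosts block is:

    hosts {
       192.169.0.1 host.minikube.internal
       fallthrough
    }

The "host record injected into CoreDNS's ConfigMap" message at 12:19:38 below confirms the replace succeeded.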
	I1212 12:19:37.798371    6715 api_server.go:52] waiting for apiserver process to appear ...
	I1212 12:19:37.830146    6715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 12:19:37.830152    6715 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 12:19:37.850855    6715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 12:19:37.830619    6715 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:19:37.850872    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHHostname
	I1212 12:19:37.851053    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHPort
	I1212 12:19:37.851163    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:37.851257    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHUsername
	I1212 12:19:37.851343    6715 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/id_rsa Username:docker}
	I1212 12:19:37.851356    6715 main.go:141] libmachine: Using API Version  1
	I1212 12:19:37.851369    6715 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:19:37.851642    6715 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:19:37.851775    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetState
	I1212 12:19:37.851920    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:19:37.851945    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | hyperkit pid from json: 6724
	I1212 12:19:37.853150    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .DriverName
	I1212 12:19:37.853334    6715 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 12:19:37.853339    6715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 12:19:37.853347    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHHostname
	I1212 12:19:37.853449    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHPort
	I1212 12:19:37.853557    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHKeyPath
	I1212 12:19:37.853652    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .GetSSHUsername
	I1212 12:19:37.853743    6715 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/multinode-675000-m02/id_rsa Username:docker}
	I1212 12:19:37.943922    6715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 12:19:37.947229    6715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 12:19:38.681256    6715 start.go:929] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I1212 12:19:38.681341    6715 api_server.go:72] duration metric: took 955.676526ms to wait for apiserver process to appear ...
	I1212 12:19:38.681349    6715 api_server.go:88] waiting for apiserver healthz status ...
	I1212 12:19:38.681366    6715 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I1212 12:19:38.686941    6715 api_server.go:279] https://192.169.0.15:8443/healthz returned 200:
	ok
	I1212 12:19:38.687815    6715 api_server.go:141] control plane version: v1.28.4
	I1212 12:19:38.687824    6715 api_server.go:131] duration metric: took 6.47192ms to wait for apiserver health ...
	I1212 12:19:38.687833    6715 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 12:19:38.691798    6715 system_pods.go:59] 4 kube-system pods found
	I1212 12:19:38.691814    6715 system_pods.go:61] "etcd-multinode-675000-m02" [c510974c-98ca-42c6-9a6a-e110615fe85f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 12:19:38.691822    6715 system_pods.go:61] "kube-apiserver-multinode-675000-m02" [523be2ec-14a5-4677-8883-6f72a85280a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 12:19:38.691826    6715 system_pods.go:61] "kube-controller-manager-multinode-675000-m02" [bc4c602a-09b0-4db0-ae39-e77563f5e5d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 12:19:38.691834    6715 system_pods.go:61] "kube-scheduler-multinode-675000-m02" [c38bd859-9491-46d3-a797-7344c0a51d09] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 12:19:38.691837    6715 system_pods.go:74] duration metric: took 4.000833ms to wait for pod list to return data ...
	I1212 12:19:38.691844    6715 kubeadm.go:581] duration metric: took 966.183262ms to wait for : map[apiserver:true system_pods:true] ...
	I1212 12:19:38.691852    6715 node_conditions.go:102] verifying NodePressure condition ...
	I1212 12:19:38.693688    6715 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1212 12:19:38.693699    6715 node_conditions.go:123] node cpu capacity is 2
	I1212 12:19:38.693708    6715 node_conditions.go:105] duration metric: took 1.853548ms to run NodePressure ...
	I1212 12:19:38.693714    6715 start.go:228] waiting for startup goroutines ...
	I1212 12:19:38.812512    6715 main.go:141] libmachine: Making call to close driver server
	I1212 12:19:38.812522    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .Close
	I1212 12:19:38.812718    6715 main.go:141] libmachine: Successfully made call to close driver server
	I1212 12:19:38.812724    6715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 12:19:38.812729    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Closing plugin on server side
	I1212 12:19:38.812734    6715 main.go:141] libmachine: Making call to close driver server
	I1212 12:19:38.812738    6715 main.go:141] libmachine: Making call to close driver server
	I1212 12:19:38.812741    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .Close
	I1212 12:19:38.812743    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .Close
	I1212 12:19:38.812876    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Closing plugin on server side
	I1212 12:19:38.812881    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Closing plugin on server side
	I1212 12:19:38.812913    6715 main.go:141] libmachine: Successfully made call to close driver server
	I1212 12:19:38.812923    6715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 12:19:38.812922    6715 main.go:141] libmachine: Successfully made call to close driver server
	I1212 12:19:38.812933    6715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 12:19:38.812942    6715 main.go:141] libmachine: Making call to close driver server
	I1212 12:19:38.812947    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .Close
	I1212 12:19:38.813085    6715 main.go:141] libmachine: Successfully made call to close driver server
	I1212 12:19:38.813089    6715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 12:19:38.813110    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Closing plugin on server side
	I1212 12:19:38.817785    6715 main.go:141] libmachine: Making call to close driver server
	I1212 12:19:38.817791    6715 main.go:141] libmachine: (multinode-675000-m02) Calling .Close
	I1212 12:19:38.817919    6715 main.go:141] libmachine: Successfully made call to close driver server
	I1212 12:19:38.817924    6715 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 12:19:38.817937    6715 main.go:141] libmachine: (multinode-675000-m02) DBG | Closing plugin on server side
	I1212 12:19:38.839755    6715 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 12:19:38.881594    6715 addons.go:502] enable addons completed in 1.175723531s: enabled=[storage-provisioner default-storageclass]
	I1212 12:19:38.881617    6715 start.go:233] waiting for cluster config update ...
	I1212 12:19:38.881630    6715 start.go:242] writing updated cluster config ...
	I1212 12:19:38.882130    6715 ssh_runner.go:195] Run: rm -f paused
	I1212 12:19:38.923631    6715 start.go:600] kubectl: 1.28.2, cluster: 1.28.4 (minor skew: 0)
	I1212 12:19:38.945770    6715 out.go:177] * Done! kubectl is now configured to use "multinode-675000-m02" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Journal begins at Tue 2023-12-12 20:17:37 UTC, ends at Tue 2023-12-12 20:19:45 UTC. --
	Dec 12 20:18:02 multinode-675000 dockerd[821]: time="2023-12-12T20:18:02.829535289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 20:18:02 multinode-675000 dockerd[821]: time="2023-12-12T20:18:02.829545935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:04 multinode-675000 cri-dockerd[1024]: time="2023-12-12T20:18:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc0b30a73c66dd1b745bdb2bcedf1caf4be4063ee094ccb10af19d2aaed40549/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 20:18:04 multinode-675000 dockerd[821]: time="2023-12-12T20:18:04.802308848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 20:18:04 multinode-675000 dockerd[821]: time="2023-12-12T20:18:04.802403506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:04 multinode-675000 dockerd[821]: time="2023-12-12T20:18:04.802424033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 20:18:04 multinode-675000 dockerd[821]: time="2023-12-12T20:18:04.802724069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.489204263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.489265671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.489310914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.489690212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:17 multinode-675000 cri-dockerd[1024]: time="2023-12-12T20:18:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1ac252978e0000c3a34bc770a591308d8d8eb559aafc048526a2323dace4e385/resolv.conf as [nameserver 192.169.0.1]"
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.866327711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.866399495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.866418710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 20:18:17 multinode-675000 dockerd[821]: time="2023-12-12T20:18:17.866428882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:32 multinode-675000 dockerd[815]: time="2023-12-12T20:18:32.968164237Z" level=info msg="ignoring event" container=d9e94810ceb68c92475605934177c0921a7e971486c2635d4dce6119c6418eba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 12 20:18:32 multinode-675000 dockerd[821]: time="2023-12-12T20:18:32.968142217Z" level=info msg="shim disconnected" id=d9e94810ceb68c92475605934177c0921a7e971486c2635d4dce6119c6418eba namespace=moby
	Dec 12 20:18:32 multinode-675000 dockerd[821]: time="2023-12-12T20:18:32.968830287Z" level=warning msg="cleaning up after shim disconnected" id=d9e94810ceb68c92475605934177c0921a7e971486c2635d4dce6119c6418eba namespace=moby
	Dec 12 20:18:32 multinode-675000 dockerd[821]: time="2023-12-12T20:18:32.968873868Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 12 20:18:32 multinode-675000 dockerd[821]: time="2023-12-12T20:18:32.980467972Z" level=warning msg="cleanup warnings time=\"2023-12-12T20:18:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Dec 12 20:18:44 multinode-675000 dockerd[821]: time="2023-12-12T20:18:44.851757359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 12 20:18:44 multinode-675000 dockerd[821]: time="2023-12-12T20:18:44.852447756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 12 20:18:44 multinode-675000 dockerd[821]: time="2023-12-12T20:18:44.852462154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 12 20:18:44 multinode-675000 dockerd[821]: time="2023-12-12T20:18:44.852469643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	605c4a5a44d8d       6e38f40d628db                                                                              About a minute ago   Running             storage-provisioner       2                   075926302b8a7       storage-provisioner
	40abbd6ba5851       ead0a4a53df89                                                                              About a minute ago   Running             coredns                   1                   1ac252978e000       coredns-5dd5756b68-2qgqq
	9420be4a7a64d       c7d1297425461                                                                              About a minute ago   Running             kindnet-cni               1                   bc0b30a73c66d       kindnet-4vq6m
	d9e94810ceb68       6e38f40d628db                                                                              About a minute ago   Exited              storage-provisioner       1                   075926302b8a7       storage-provisioner
	2c9cb416955ce       83f6cc407eed8                                                                              About a minute ago   Running             kube-proxy                1                   92f29a96ae414       kube-proxy-q4dfx
	13a33f6b88010       73deb9a3f7025                                                                              About a minute ago   Running             etcd                      1                   c76d4e0618a55       etcd-multinode-675000
	fb02933e38d84       7fe0e6f37db33                                                                              About a minute ago   Running             kube-apiserver            1                   660eb1a0b7c78       kube-apiserver-multinode-675000
	10a8d5eab4494       d058aa5ab969c                                                                              About a minute ago   Running             kube-controller-manager   1                   346cfc6369ea4       kube-controller-manager-multinode-675000
	6e2edde92c79a       e3db313c6dbc0                                                                              About a minute ago   Running             kube-scheduler            1                   75aadb61316a2       kube-scheduler-multinode-675000
	5139a190a0a70       ead0a4a53df89                                                                              2 minutes ago        Exited              coredns                   0                   906956fbad371       coredns-5dd5756b68-2qgqq
	a391a1302e24d       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052   2 minutes ago        Exited              kindnet-cni               0                   c6f5291d5248b       kindnet-4vq6m
	5c4ec41a543b9       83f6cc407eed8                                                                              2 minutes ago        Exited              kube-proxy                0                   c4d605b91fefd       kube-proxy-q4dfx
	ec1ccfe051cf8       e3db313c6dbc0                                                                              3 minutes ago        Exited              kube-scheduler            0                   ec16ed8743035       kube-scheduler-multinode-675000
	0dfb53ca11626       73deb9a3f7025                                                                              3 minutes ago        Exited              etcd                      0                   759eb904c17af       etcd-multinode-675000
	2e3863acd67e9       d058aa5ab969c                                                                              3 minutes ago        Exited              kube-controller-manager   0                   5365eadc60c2d       kube-controller-manager-multinode-675000
	6a5980fcc6dc9       7fe0e6f37db33                                                                              3 minutes ago        Exited              kube-apiserver            0                   32f46c3efb2c7       kube-apiserver-multinode-675000
	
	
	==> coredns [40abbd6ba585] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46491 - 57509 "HINFO IN 5857152611344469365.3469466211013647927. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014178077s
	
	
	==> coredns [5139a190a0a7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39661 - 33289 "HINFO IN 696511843846326458.4911786665791153147. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.013651973s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-675000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-675000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bbafb8443bb801a11d242513c0872b48bb9d80e1
	                    minikube.k8s.io/name=multinode-675000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_12T12_16_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Dec 2023 20:16:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-675000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Dec 2023 20:19:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Dec 2023 20:18:09 +0000   Tue, 12 Dec 2023 20:16:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Dec 2023 20:18:09 +0000   Tue, 12 Dec 2023 20:16:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Dec 2023 20:18:09 +0000   Tue, 12 Dec 2023 20:16:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Dec 2023 20:18:09 +0000   Tue, 12 Dec 2023 20:18:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-675000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2166052Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7d644f703be46d69b715610990df26b
	  System UUID:                fbe411ee-0000-0000-b1fb-f01898ef957c
	  Boot ID:                    ce0e12ea-fd35-4f70-958b-a5f29488f39c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-2qgqq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m41s
	  kube-system                 etcd-multinode-675000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m55s
	  kube-system                 kindnet-4vq6m                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m42s
	  kube-system                 kube-apiserver-multinode-675000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kube-controller-manager-multinode-675000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kube-proxy-q4dfx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-scheduler-multinode-675000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m40s                kube-proxy       
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 3m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m1s (x8 over 3m1s)  kubelet          Node multinode-675000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x8 over 3m1s)  kubelet          Node multinode-675000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x7 over 3m1s)  kubelet          Node multinode-675000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m54s                kubelet          Node multinode-675000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m54s                kubelet          Node multinode-675000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m54s                kubelet          Node multinode-675000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m54s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m42s                node-controller  Node multinode-675000 event: Registered Node multinode-675000 in Controller
	  Normal  NodeReady                2m32s                kubelet          Node multinode-675000 status is now: NodeReady
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  109s (x8 over 109s)  kubelet          Node multinode-675000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x8 over 109s)  kubelet          Node multinode-675000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s (x7 over 109s)  kubelet          Node multinode-675000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           92s                  node-controller  Node multinode-675000 event: Registered Node multinode-675000 in Controller
	
	
	==> dmesg <==
	[  +0.029205] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +5.116212] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007470] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.956604] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.039372] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.932731] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.063668] systemd-fstab-generator[511]: Ignoring "noauto" for root device
	[  +0.096374] systemd-fstab-generator[522]: Ignoring "noauto" for root device
	[  +0.749987] systemd-fstab-generator[733]: Ignoring "noauto" for root device
	[  +0.237846] systemd-fstab-generator[772]: Ignoring "noauto" for root device
	[  +0.103960] systemd-fstab-generator[783]: Ignoring "noauto" for root device
	[  +0.099137] systemd-fstab-generator[796]: Ignoring "noauto" for root device
	[  +1.234447] kauditd_printk_skb: 30 callbacks suppressed
	[  +0.152778] systemd-fstab-generator[969]: Ignoring "noauto" for root device
	[  +0.084812] systemd-fstab-generator[980]: Ignoring "noauto" for root device
	[  +0.083836] systemd-fstab-generator[991]: Ignoring "noauto" for root device
	[  +0.095398] systemd-fstab-generator[1002]: Ignoring "noauto" for root device
	[  +0.114848] systemd-fstab-generator[1016]: Ignoring "noauto" for root device
	[ +11.809679] systemd-fstab-generator[1260]: Ignoring "noauto" for root device
	[  +0.269328] kauditd_printk_skb: 29 callbacks suppressed
	[Dec12 20:18] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [0dfb53ca1162] <==
	{"level":"info","ts":"2023-12-12T20:16:46.212171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2023-12-12T20:16:46.212178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2023-12-12T20:16:46.212184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2023-12-12T20:16:46.21448Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:16:46.214911Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-675000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T20:16:46.215052Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:16:46.215333Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:16:46.215433Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:16:46.215068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:16:46.219011Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2023-12-12T20:16:46.215082Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:16:46.2198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T20:16:46.223336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T20:16:46.223372Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-12T20:16:48.80496Z","caller":"traceutil/trace.go:171","msg":"trace[1901849977] transaction","detail":"{read_only:false; response_revision:24; number_of_response:1; }","duration":"101.053217ms","start":"2023-12-12T20:16:48.703894Z","end":"2023-12-12T20:16:48.804947Z","steps":["trace[1901849977] 'process raft request'  (duration: 60.497972ms)","trace[1901849977] 'compare'  (duration: 40.496633ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-12T20:17:21.184189Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-12T20:17:21.184253Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-675000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	{"level":"warn","ts":"2023-12-12T20:17:21.184303Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T20:17:21.184397Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T20:17:21.194803Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-12T20:17:21.194848Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-12T20:17:21.194906Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e0290fa3161c5471","current-leader-member-id":"e0290fa3161c5471"}
	{"level":"info","ts":"2023-12-12T20:17:21.19608Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T20:17:21.196155Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T20:17:21.196163Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-675000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	
	
	==> etcd [13a33f6b8801] <==
	{"level":"info","ts":"2023-12-12T20:17:58.526963Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T20:17:58.526985Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-12T20:17:58.528868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2023-12-12T20:17:58.529024Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2023-12-12T20:17:58.529391Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:17:58.529464Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-12T20:17:58.531567Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-12T20:17:58.531655Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T20:17:58.531759Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2023-12-12T20:17:58.532106Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"e0290fa3161c5471","initial-advertise-peer-urls":["https://192.169.0.13:2380"],"listen-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-12T20:17:58.532188Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-12T20:18:00.38618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-12T20:18:00.386265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-12T20:18:00.386345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2023-12-12T20:18:00.386362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 3"}
	{"level":"info","ts":"2023-12-12T20:18:00.38637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2023-12-12T20:18:00.386382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 3"}
	{"level":"info","ts":"2023-12-12T20:18:00.386432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2023-12-12T20:18:00.388101Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:18:00.389449Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2023-12-12T20:18:00.388035Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-675000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-12T20:18:00.390226Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-12T20:18:00.3911Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-12T20:18:00.395073Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-12T20:18:00.395201Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:19:46 up 2 min,  0 users,  load average: 0.27, 0.17, 0.07
	Linux multinode-675000 5.10.57 #1 SMP Fri Dec 8 05:36:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [9420be4a7a64] <==
	I1212 20:18:05.123961       1 main.go:116] setting mtu 1500 for CNI 
	I1212 20:18:05.123996       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 20:18:05.124018       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 20:18:05.426096       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:18:05.426142       1 main.go:227] handling current node
	I1212 20:18:15.436313       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:18:15.436349       1 main.go:227] handling current node
	I1212 20:18:25.442592       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:18:25.442609       1 main.go:227] handling current node
	I1212 20:18:35.454478       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:18:35.454494       1 main.go:227] handling current node
	I1212 20:18:45.463869       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:18:45.463907       1 main.go:227] handling current node
	I1212 20:18:55.466938       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:18:55.466974       1 main.go:227] handling current node
	I1212 20:19:05.474228       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:19:05.474263       1 main.go:227] handling current node
	I1212 20:19:15.486379       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:19:15.486505       1 main.go:227] handling current node
	I1212 20:19:25.490622       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:19:25.490702       1 main.go:227] handling current node
	I1212 20:19:35.493669       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:19:35.493825       1 main.go:227] handling current node
	I1212 20:19:45.504848       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:19:45.504862       1 main.go:227] handling current node
	
	
	==> kindnet [a391a1302e24] <==
	I1212 20:17:10.309204       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1212 20:17:10.309282       1 main.go:107] hostIP = 192.169.0.13
	podIP = 192.169.0.13
	I1212 20:17:10.309416       1 main.go:116] setting mtu 1500 for CNI 
	I1212 20:17:10.309458       1 main.go:146] kindnetd IP family: "ipv4"
	I1212 20:17:10.309478       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1212 20:17:10.609013       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:17:10.609049       1 main.go:227] handling current node
	I1212 20:17:20.613618       1 main.go:223] Handling node with IPs: map[192.169.0.13:{}]
	I1212 20:17:20.613636       1 main.go:227] handling current node
	
	
	==> kube-apiserver [6a5980fcc6dc] <==
	W1212 20:17:22.191719       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191728       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191748       1 logging.go:59] [core] [Channel #13 SubChannel #15] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191760       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191776       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191791       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191803       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191820       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191832       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191850       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191862       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191887       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191891       1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191919       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191921       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191949       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191972       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191979       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.192009       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.192027       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.192037       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.192010       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.192055       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.191950       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 20:17:22.192079       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [fb02933e38d8] <==
	I1212 20:18:01.333189       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1212 20:18:01.363754       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1212 20:18:01.363910       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 20:18:01.420145       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:18:01.427472       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 20:18:01.428227       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1212 20:18:01.428280       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1212 20:18:01.429042       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1212 20:18:01.429872       1 shared_informer.go:318] Caches are synced for configmaps
	I1212 20:18:01.430019       1 aggregator.go:166] initial CRD sync complete...
	I1212 20:18:01.430163       1 autoregister_controller.go:141] Starting autoregister controller
	I1212 20:18:01.430189       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:18:01.430195       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:18:01.435066       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 20:18:01.442494       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1212 20:18:01.479079       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1212 20:18:02.335074       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1212 20:18:02.559015       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I1212 20:18:02.559823       1 controller.go:624] quota admission added evaluator for: endpoints
	I1212 20:18:02.565779       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:18:03.905205       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1212 20:18:03.996070       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1212 20:18:04.004028       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1212 20:18:04.042733       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:18:04.047213       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [10a8d5eab449] <==
	I1212 20:18:13.743624       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1212 20:18:13.743918       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1212 20:18:13.745184       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1212 20:18:13.746849       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1212 20:18:13.748631       1 shared_informer.go:318] Caches are synced for taint
	I1212 20:18:13.748895       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1212 20:18:13.749079       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-675000"
	I1212 20:18:13.749242       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1212 20:18:13.748911       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1212 20:18:13.749499       1 taint_manager.go:210] "Sending events to api server"
	I1212 20:18:13.749724       1 event.go:307] "Event occurred" object="multinode-675000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-675000 event: Registered Node multinode-675000 in Controller"
	I1212 20:18:13.806060       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1212 20:18:13.831230       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1212 20:18:13.848345       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1212 20:18:13.858199       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 20:18:13.861528       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1212 20:18:13.865749       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 20:18:13.913918       1 shared_informer.go:318] Caches are synced for cronjob
	I1212 20:18:13.932222       1 shared_informer.go:318] Caches are synced for job
	I1212 20:18:14.272655       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:18:14.272921       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 20:18:14.294370       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:18:18.952577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.857µs"
	I1212 20:18:18.978539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.676439ms"
	I1212 20:18:18.978756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="176.173µs"
	
	
	==> kube-controller-manager [2e3863acd67e] <==
	I1212 20:17:03.597170       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1212 20:17:03.603638       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4vq6m"
	I1212 20:17:03.603673       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q4dfx"
	I1212 20:17:03.632240       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1212 20:17:03.633170       1 shared_informer.go:318] Caches are synced for endpoint
	I1212 20:17:03.720712       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 20:17:03.784064       1 shared_informer.go:318] Caches are synced for resource quota
	I1212 20:17:04.112473       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:17:04.180891       1 shared_informer.go:318] Caches are synced for garbage collector
	I1212 20:17:04.180925       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1212 20:17:04.387915       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1212 20:17:04.569116       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1212 20:17:04.628590       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-7ddxh"
	I1212 20:17:04.642387       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-2qgqq"
	I1212 20:17:04.666378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="278.873535ms"
	I1212 20:17:04.675935       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-7ddxh"
	I1212 20:17:04.686219       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.811252ms"
	I1212 20:17:04.690363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="4.119126ms"
	I1212 20:17:04.690483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.544µs"
	I1212 20:17:13.476941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.916µs"
	I1212 20:17:13.496255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.815µs"
	I1212 20:17:13.582289       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1212 20:17:15.437619       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.389µs"
	I1212 20:17:15.456455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.706842ms"
	I1212 20:17:15.456719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.35µs"
	
	
	==> kube-proxy [2c9cb416955c] <==
	I1212 20:18:02.663704       1 server_others.go:69] "Using iptables proxy"
	I1212 20:18:02.685147       1 node.go:141] Successfully retrieved node IP: 192.169.0.13
	I1212 20:18:02.741359       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 20:18:02.741431       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 20:18:02.743329       1 server_others.go:152] "Using iptables Proxier"
	I1212 20:18:02.744068       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 20:18:02.744393       1 server.go:846] "Version info" version="v1.28.4"
	I1212 20:18:02.744427       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:18:02.746456       1 config.go:188] "Starting service config controller"
	I1212 20:18:02.746796       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 20:18:02.746849       1 config.go:97] "Starting endpoint slice config controller"
	I1212 20:18:02.746854       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 20:18:02.749719       1 config.go:315] "Starting node config controller"
	I1212 20:18:02.749850       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 20:18:02.847975       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 20:18:02.848021       1 shared_informer.go:318] Caches are synced for service config
	I1212 20:18:02.850757       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [5c4ec41a543b] <==
	I1212 20:17:05.147381       1 server_others.go:69] "Using iptables proxy"
	I1212 20:17:05.156379       1 node.go:141] Successfully retrieved node IP: 192.169.0.13
	I1212 20:17:05.209428       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1212 20:17:05.209444       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 20:17:05.221145       1 server_others.go:152] "Using iptables Proxier"
	I1212 20:17:05.221199       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1212 20:17:05.221326       1 server.go:846] "Version info" version="v1.28.4"
	I1212 20:17:05.221358       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:17:05.222088       1 config.go:315] "Starting node config controller"
	I1212 20:17:05.222117       1 config.go:188] "Starting service config controller"
	I1212 20:17:05.222123       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1212 20:17:05.222134       1 config.go:97] "Starting endpoint slice config controller"
	I1212 20:17:05.222136       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1212 20:17:05.224472       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1212 20:17:05.325561       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1212 20:17:05.325623       1 shared_informer.go:318] Caches are synced for service config
	I1212 20:17:05.325803       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6e2edde92c79] <==
	I1212 20:17:58.495264       1 serving.go:348] Generated self-signed cert in-memory
	W1212 20:18:01.372969       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 20:18:01.373117       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:18:01.373212       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 20:18:01.373230       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 20:18:01.426207       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1212 20:18:01.426314       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:18:01.428502       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:18:01.429201       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 20:18:01.430440       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1212 20:18:01.430491       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1212 20:18:01.530029       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ec1ccfe051cf] <==
	E1212 20:16:48.670629       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 20:16:48.670761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 20:16:48.670797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 20:16:48.670806       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 20:16:48.670812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 20:16:48.670822       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 20:16:48.670827       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 20:16:48.671884       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 20:16:48.671978       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:16:49.523847       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 20:16:49.523874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1212 20:16:49.553927       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 20:16:49.553950       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1212 20:16:49.572215       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 20:16:49.572242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 20:16:49.611936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 20:16:49.612040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 20:16:49.643230       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 20:16:49.643284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 20:16:49.760509       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 20:16:49.760528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1212 20:16:49.954233       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 20:17:21.142227       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1212 20:17:21.142282       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1212 20:17:21.142390       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	-- Journal begins at Tue 2023-12-12 20:17:37 UTC, ends at Tue 2023-12-12 20:19:47 UTC. --
	Dec 12 20:18:01 multinode-675000 kubelet[1266]: E1212 20:18:01.793823    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:18:01 multinode-675000 kubelet[1266]: E1212 20:18:01.794007    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume podName:6bc47af7-f871-4daa-97ca-23500d80fc1b nodeName:}" failed. No retries permitted until 2023-12-12 20:18:02.293956932 +0000 UTC m=+5.720183043 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume") pod "coredns-5dd5756b68-2qgqq" (UID: "6bc47af7-f871-4daa-97ca-23500d80fc1b") : object "kube-system"/"coredns" not registered
	Dec 12 20:18:02 multinode-675000 kubelet[1266]: E1212 20:18:02.296380    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:18:02 multinode-675000 kubelet[1266]: E1212 20:18:02.296423    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume podName:6bc47af7-f871-4daa-97ca-23500d80fc1b nodeName:}" failed. No retries permitted until 2023-12-12 20:18:03.29641335 +0000 UTC m=+6.722639465 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume") pod "coredns-5dd5756b68-2qgqq" (UID: "6bc47af7-f871-4daa-97ca-23500d80fc1b") : object "kube-system"/"coredns" not registered
	Dec 12 20:18:02 multinode-675000 kubelet[1266]: I1212 20:18:02.785003    1266 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="075926302b8a7db5b17e4029f40a3ea644500efa4bda04436ff86e1d0b6bd7c1"
	Dec 12 20:18:02 multinode-675000 kubelet[1266]: I1212 20:18:02.793611    1266 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f29a96ae4144d6c320eca42d930561bab97cd5b7de520b97fef4e69c5e514b"
	Dec 12 20:18:03 multinode-675000 kubelet[1266]: E1212 20:18:03.302883    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:18:03 multinode-675000 kubelet[1266]: E1212 20:18:03.302977    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume podName:6bc47af7-f871-4daa-97ca-23500d80fc1b nodeName:}" failed. No retries permitted until 2023-12-12 20:18:05.302963013 +0000 UTC m=+8.729189124 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume") pod "coredns-5dd5756b68-2qgqq" (UID: "6bc47af7-f871-4daa-97ca-23500d80fc1b") : object "kube-system"/"coredns" not registered
	Dec 12 20:18:04 multinode-675000 kubelet[1266]: E1212 20:18:04.748044    1266 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-2qgqq" podUID="6bc47af7-f871-4daa-97ca-23500d80fc1b"
	Dec 12 20:18:04 multinode-675000 kubelet[1266]: I1212 20:18:04.748877    1266 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc0b30a73c66dd1b745bdb2bcedf1caf4be4063ee094ccb10af19d2aaed40549"
	Dec 12 20:18:05 multinode-675000 kubelet[1266]: E1212 20:18:05.318331    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:18:05 multinode-675000 kubelet[1266]: E1212 20:18:05.318396    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume podName:6bc47af7-f871-4daa-97ca-23500d80fc1b nodeName:}" failed. No retries permitted until 2023-12-12 20:18:09.31838649 +0000 UTC m=+12.744612603 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume") pod "coredns-5dd5756b68-2qgqq" (UID: "6bc47af7-f871-4daa-97ca-23500d80fc1b") : object "kube-system"/"coredns" not registered
	Dec 12 20:18:06 multinode-675000 kubelet[1266]: E1212 20:18:06.805617    1266 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-2qgqq" podUID="6bc47af7-f871-4daa-97ca-23500d80fc1b"
	Dec 12 20:18:08 multinode-675000 kubelet[1266]: E1212 20:18:08.803857    1266 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-2qgqq" podUID="6bc47af7-f871-4daa-97ca-23500d80fc1b"
	Dec 12 20:18:09 multinode-675000 kubelet[1266]: E1212 20:18:09.346694    1266 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:18:09 multinode-675000 kubelet[1266]: E1212 20:18:09.346735    1266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume podName:6bc47af7-f871-4daa-97ca-23500d80fc1b nodeName:}" failed. No retries permitted until 2023-12-12 20:18:17.346724802 +0000 UTC m=+20.772950916 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6bc47af7-f871-4daa-97ca-23500d80fc1b-config-volume") pod "coredns-5dd5756b68-2qgqq" (UID: "6bc47af7-f871-4daa-97ca-23500d80fc1b") : object "kube-system"/"coredns" not registered
	Dec 12 20:18:09 multinode-675000 kubelet[1266]: I1212 20:18:09.814740    1266 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 12 20:18:33 multinode-675000 kubelet[1266]: I1212 20:18:33.045498    1266 scope.go:117] "RemoveContainer" containerID="0b9a6a315baeed82d97d15bb0c63a5901745b75641b835e1bcfd45cca596b17a"
	Dec 12 20:18:33 multinode-675000 kubelet[1266]: I1212 20:18:33.045729    1266 scope.go:117] "RemoveContainer" containerID="d9e94810ceb68c92475605934177c0921a7e971486c2635d4dce6119c6418eba"
	Dec 12 20:18:33 multinode-675000 kubelet[1266]: E1212 20:18:33.045864    1266 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6f39d754-bc48-49e5-a0e4-fda2cbf521b7)\"" pod="kube-system/storage-provisioner" podUID="6f39d754-bc48-49e5-a0e4-fda2cbf521b7"
	Dec 12 20:18:44 multinode-675000 kubelet[1266]: I1212 20:18:44.804270    1266 scope.go:117] "RemoveContainer" containerID="d9e94810ceb68c92475605934177c0921a7e971486c2635d4dce6119c6418eba"
	Dec 12 20:18:56 multinode-675000 kubelet[1266]: E1212 20:18:56.822978    1266 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 12 20:18:56 multinode-675000 kubelet[1266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 20:18:56 multinode-675000 kubelet[1266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 20:18:56 multinode-675000 kubelet[1266]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [605c4a5a44d8] <==
	I1212 20:18:44.941211       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 20:18:44.951062       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 20:18:44.952397       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 20:19:02.348609       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 20:19:02.348861       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-675000_8aea5e97-f0bd-466e-bce6-1e0ac01f1d57!
	I1212 20:19:02.349041       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9e6fbc79-a02a-4f5f-82d7-de5fe00a9d7b", APIVersion:"v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-675000_8aea5e97-f0bd-466e-bce6-1e0ac01f1d57 became leader
	I1212 20:19:02.449910       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-675000_8aea5e97-f0bd-466e-bce6-1e0ac01f1d57!
	
	
	==> storage-provisioner [d9e94810ceb6] <==
	I1212 20:18:02.952354       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1212 20:18:32.957033       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-675000 -n multinode-675000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-675000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/ValidateNameConflict FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/ValidateNameConflict (83.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (15.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kindnet-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : exit status 90 (15.3759437s)

                                                
                                                
-- stdout --
	* [kindnet-183000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node kindnet-183000 in cluster kindnet-183000
	* Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 12:37:41.865659    8788 out.go:296] Setting OutFile to fd 1 ...
	I1212 12:37:41.866038    8788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:37:41.866045    8788 out.go:309] Setting ErrFile to fd 2...
	I1212 12:37:41.866050    8788 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:37:41.866286    8788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	I1212 12:37:41.868395    8788 out.go:303] Setting JSON to false
	I1212 12:37:41.895276    8788 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4032,"bootTime":1702409429,"procs":539,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 12:37:41.895380    8788 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 12:37:41.951501    8788 out.go:177] * [kindnet-183000] minikube v1.32.0 on Darwin 14.2
	I1212 12:37:42.064872    8788 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 12:37:42.029111    8788 notify.go:220] Checking for updates...
	I1212 12:37:42.124094    8788 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:37:42.168042    8788 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 12:37:42.225802    8788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 12:37:42.283996    8788 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	I1212 12:37:42.325973    8788 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 12:37:42.363624    8788 config.go:182] Loaded profile config "flannel-183000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:37:42.363740    8788 config.go:182] Loaded profile config "multinode-675000-m01": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:37:42.363819    8788 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 12:37:42.392826    8788 out.go:177] * Using the hyperkit driver based on user configuration
	I1212 12:37:42.451010    8788 start.go:298] selected driver: hyperkit
	I1212 12:37:42.451027    8788 start.go:902] validating driver "hyperkit" against <nil>
	I1212 12:37:42.451040    8788 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 12:37:42.454652    8788 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 12:37:42.454773    8788 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17734-1975/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 12:37:42.463190    8788 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 12:37:42.467971    8788 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:37:42.467997    8788 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 12:37:42.468036    8788 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 12:37:42.468282    8788 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 12:37:42.468344    8788 cni.go:84] Creating CNI manager for "kindnet"
	I1212 12:37:42.468356    8788 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 12:37:42.468367    8788 start_flags.go:323] config:
	{Name:kindnet-183000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 12:37:42.468549    8788 iso.go:125] acquiring lock: {Name:mkd640d41cda61c79a7d2c2e38355d745b556a2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 12:37:42.489868    8788 out.go:177] * Starting control plane node kindnet-183000 in cluster kindnet-183000
	I1212 12:37:42.547991    8788 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 12:37:42.548036    8788 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 12:37:42.548052    8788 cache.go:56] Caching tarball of preloaded images
	I1212 12:37:42.548172    8788 preload.go:174] Found /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1212 12:37:42.548183    8788 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 12:37:42.548262    8788 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kindnet-183000/config.json ...
	I1212 12:37:42.548281    8788 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kindnet-183000/config.json: {Name:mkbb7d5b8bd3762db375cd2473f00e84c5b8b872 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 12:37:42.548538    8788 start.go:365] acquiring machines lock for kindnet-183000: {Name:mkcfb9a2794178bbcff953e64f7f6a3e3b1e9997 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 12:37:42.548587    8788 start.go:369] acquired machines lock for "kindnet-183000" in 39.899µs
	I1212 12:37:42.548612    8788 start.go:93] Provisioning new machine with config: &{Name:kindnet-183000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-183000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1212 12:37:42.548659    8788 start.go:125] createHost starting for "" (driver="hyperkit")
	I1212 12:37:42.569799    8788 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1212 12:37:42.570070    8788 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:37:42.570108    8788 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:37:42.579071    8788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53343
	I1212 12:37:42.579523    8788 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:37:42.579981    8788 main.go:141] libmachine: Using API Version  1
	I1212 12:37:42.579993    8788 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:37:42.580287    8788 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:37:42.580417    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetMachineName
	I1212 12:37:42.580540    8788 main.go:141] libmachine: (kindnet-183000) Calling .DriverName
	I1212 12:37:42.580648    8788 start.go:159] libmachine.API.Create for "kindnet-183000" (driver="hyperkit")
	I1212 12:37:42.580677    8788 client.go:168] LocalClient.Create starting
	I1212 12:37:42.580735    8788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem
	I1212 12:37:42.580792    8788 main.go:141] libmachine: Decoding PEM data...
	I1212 12:37:42.580811    8788 main.go:141] libmachine: Parsing certificate...
	I1212 12:37:42.580879    8788 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem
	I1212 12:37:42.580918    8788 main.go:141] libmachine: Decoding PEM data...
	I1212 12:37:42.580932    8788 main.go:141] libmachine: Parsing certificate...
	I1212 12:37:42.580955    8788 main.go:141] libmachine: Running pre-create checks...
	I1212 12:37:42.580987    8788 main.go:141] libmachine: (kindnet-183000) Calling .PreCreateCheck
	I1212 12:37:42.581088    8788 main.go:141] libmachine: (kindnet-183000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:37:42.581247    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetConfigRaw
	I1212 12:37:42.581751    8788 main.go:141] libmachine: Creating machine...
	I1212 12:37:42.581761    8788 main.go:141] libmachine: (kindnet-183000) Calling .Create
	I1212 12:37:42.581841    8788 main.go:141] libmachine: (kindnet-183000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:37:42.582008    8788 main.go:141] libmachine: (kindnet-183000) DBG | I1212 12:37:42.581832    8796 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/17734-1975/.minikube
	I1212 12:37:42.582098    8788 main.go:141] libmachine: (kindnet-183000) Downloading /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17734-1975/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso...
	I1212 12:37:42.793855    8788 main.go:141] libmachine: (kindnet-183000) DBG | I1212 12:37:42.793791    8796 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/id_rsa...
	I1212 12:37:43.056836    8788 main.go:141] libmachine: (kindnet-183000) DBG | I1212 12:37:43.056769    8796 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/kindnet-183000.rawdisk...
	I1212 12:37:43.056862    8788 main.go:141] libmachine: (kindnet-183000) DBG | Writing magic tar header
	I1212 12:37:43.056879    8788 main.go:141] libmachine: (kindnet-183000) DBG | Writing SSH key tar header
	I1212 12:37:43.057174    8788 main.go:141] libmachine: (kindnet-183000) DBG | I1212 12:37:43.057140    8796 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000 ...
	I1212 12:37:43.394115    8788 main.go:141] libmachine: (kindnet-183000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:37:43.394131    8788 main.go:141] libmachine: (kindnet-183000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/hyperkit.pid
	I1212 12:37:43.394167    8788 main.go:141] libmachine: (kindnet-183000) DBG | Using UUID 4c7a61ac-992e-11ee-90a1-f01898ef957c
	I1212 12:37:43.420067    8788 main.go:141] libmachine: (kindnet-183000) DBG | Generated MAC 1a:7f:b8:14:2a:8f
	I1212 12:37:43.420087    8788 main.go:141] libmachine: (kindnet-183000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kindnet-183000
	I1212 12:37:43.420117    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4c7a61ac-992e-11ee-90a1-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000110330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1212 12:37:43.420146    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4c7a61ac-992e-11ee-90a1-f01898ef957c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000110330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1212 12:37:43.420253    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/hyperkit.pid", "-c", "2", "-m", "3072M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4c7a61ac-992e-11ee-90a1-f01898ef957c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/kindnet-183000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/tty,log=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/bzimage,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kindnet-183000"}
	I1212 12:37:43.420299    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/hyperkit.pid -c 2 -m 3072M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4c7a61ac-992e-11ee-90a1-f01898ef957c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/kindnet-183000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/tty,log=/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/console-ring -f kexec,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/bzimage,/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kindnet-183000"
	I1212 12:37:43.420318    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1212 12:37:43.423697    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 DEBUG: hyperkit: Pid is 8797
	I1212 12:37:43.424175    8788 main.go:141] libmachine: (kindnet-183000) DBG | Attempt 0
	I1212 12:37:43.424192    8788 main.go:141] libmachine: (kindnet-183000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:37:43.424336    8788 main.go:141] libmachine: (kindnet-183000) DBG | hyperkit pid from json: 8797
	I1212 12:37:43.425658    8788 main.go:141] libmachine: (kindnet-183000) DBG | Searching for 1a:7f:b8:14:2a:8f in /var/db/dhcpd_leases ...
	I1212 12:37:43.425788    8788 main.go:141] libmachine: (kindnet-183000) DBG | Found 31 entries in /var/db/dhcpd_leases!
	I1212 12:37:43.425809    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:da:e2:d8:da:bf:c3 ID:1,da:e2:d8:da:bf:c3 Lease:0x657a15eb}
	I1212 12:37:43.425864    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:7e:a:9:f8:41:d6 ID:1,7e:a:9:f8:41:d6 Lease:0x657a15cd}
	I1212 12:37:43.425884    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:9a:f9:c3:85:71:d1 ID:1,9a:f9:c3:85:71:d1 Lease:0x6578c460}
	I1212 12:37:43.425948    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:4a:9e:77:f4:f6:17 ID:1,4a:9e:77:f4:f6:17 Lease:0x6578c435}
	I1212 12:37:43.425974    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:f6:71:6b:7f:e6:12 ID:1,f6:71:6b:7f:e6:12 Lease:0x657a1563}
	I1212 12:37:43.425989    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6:36:a9:90:e6:5a ID:1,6:36:a9:90:e6:5a Lease:0x657a1551}
	I1212 12:37:43.426000    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:6a:f9:11:23:5f:c0 ID:1,6a:f9:11:23:5f:c0 Lease:0x657a1523}
	I1212 12:37:43.426026    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:1e:b8:3f:a6:ec:2d ID:1,1e:b8:3f:a6:ec:2d Lease:0x657a1438}
	I1212 12:37:43.426041    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:21:4e:2d:af:b4 ID:1,9a:21:4e:2d:af:b4 Lease:0x6578c29e}
	I1212 12:37:43.426061    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:b2:ee:c6:ac:a8:d7 ID:1,b2:ee:c6:ac:a8:d7 Lease:0x657a13ef}
	I1212 12:37:43.426078    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:de:3:d1:7e:56:cf ID:1,de:3:d1:7e:56:cf Lease:0x657a13d4}
	I1212 12:37:43.426101    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:46:a9:2d:43:e4:ab ID:1,46:a9:2d:43:e4:ab Lease:0x6578c264}
	I1212 12:37:43.426119    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:aa:d4:5f:30:b6:d3 ID:1,aa:d4:5f:30:b6:d3 Lease:0x657a13a9}
	I1212 12:37:43.426139    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:4a:4c:49:6:3c:3b ID:1,4a:4c:49:6:3c:3b Lease:0x657a1392}
	I1212 12:37:43.426156    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:16:f1:67:71:c6:1b ID:1,16:f1:67:71:c6:1b Lease:0x657a1326}
	I1212 12:37:43.426170    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:ea:5b:5a:8c:df:db ID:1,ea:5b:5a:8c:df:db Lease:0x657a12b9}
	I1212 12:37:43.426183    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a:be:a7:4:7b:59 ID:1,a:be:a7:4:7b:59 Lease:0x657a1280}
	I1212 12:37:43.426206    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ae:69:0:1f:74:78 ID:1,ae:69:0:1f:74:78 Lease:0x657a11c1}
	I1212 12:37:43.426222    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:b0:b0:c6:4d:99 ID:1,9a:b0:b0:c6:4d:99 Lease:0x657a119b}
	I1212 12:37:43.426234    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:6:ed:17:4f:83:b2 ID:1,6:ed:17:4f:83:b2 Lease:0x657a1162}
	I1212 12:37:43.426262    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:61:fd:7b:ff:ad ID:1,d6:61:fd:7b:ff:ad Lease:0x6578bf04}
	I1212 12:37:43.426285    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:7e:4f:44:39:56:54 ID:1,7e:4f:44:39:56:54 Lease:0x6578bed7}
	I1212 12:37:43.426305    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:66:e9:56:a3:ac:b3 ID:1,66:e9:56:a3:ac:b3 Lease:0x6578beae}
	I1212 12:37:43.426327    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:44:78:68:b1:3a ID:1,3e:44:78:68:b1:3a Lease:0x657a0fe7}
	I1212 12:37:43.426347    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:f3:3:99:96:a ID:1,1e:f3:3:99:96:a Lease:0x657a0fab}
	I1212 12:37:43.426372    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:a6:c:94:a4:bb:23 ID:1,a6:c:94:a4:bb:23 Lease:0x657a0f0c}
	I1212 12:37:43.426384    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:52:61:6b:49:5:19 ID:1,52:61:6b:49:5:19 Lease:0x6578bd76}
	I1212 12:37:43.426393    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:4:40:f1:6c:89 ID:1,aa:4:40:f1:6c:89 Lease:0x657a0e0c}
	I1212 12:37:43.426402    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:32:8b:81:0:e0:a2 ID:1,32:8b:81:0:e0:a2 Lease:0x6578bc81}
	I1212 12:37:43.426413    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:e6:d:d2:81:4b ID:1,aa:e6:d:d2:81:4b Lease:0x657a0cd4}
	I1212 12:37:43.426422    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a0c44}
	I1212 12:37:43.435083    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1212 12:37:43.446961    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1212 12:37:43.447825    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 12:37:43.447859    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 12:37:43.447876    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 12:37:43.447890    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 12:37:43.851276    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1212 12:37:43.851292    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1212 12:37:43.955344    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1212 12:37:43.955363    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1212 12:37:43.955371    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1212 12:37:43.955382    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1212 12:37:43.956213    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1212 12:37:43.956223    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1212 12:37:45.426012    8788 main.go:141] libmachine: (kindnet-183000) DBG | Attempt 1
	I1212 12:37:45.426031    8788 main.go:141] libmachine: (kindnet-183000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:37:45.426093    8788 main.go:141] libmachine: (kindnet-183000) DBG | hyperkit pid from json: 8797
	I1212 12:37:45.427056    8788 main.go:141] libmachine: (kindnet-183000) DBG | Searching for 1a:7f:b8:14:2a:8f in /var/db/dhcpd_leases ...
	I1212 12:37:45.427122    8788 main.go:141] libmachine: (kindnet-183000) DBG | Found 31 entries in /var/db/dhcpd_leases!
	I1212 12:37:45.427135    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:da:e2:d8:da:bf:c3 ID:1,da:e2:d8:da:bf:c3 Lease:0x657a15eb}
	I1212 12:37:45.427145    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:7e:a:9:f8:41:d6 ID:1,7e:a:9:f8:41:d6 Lease:0x657a15cd}
	I1212 12:37:45.427153    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:9a:f9:c3:85:71:d1 ID:1,9a:f9:c3:85:71:d1 Lease:0x6578c460}
	I1212 12:37:45.427170    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:4a:9e:77:f4:f6:17 ID:1,4a:9e:77:f4:f6:17 Lease:0x6578c435}
	I1212 12:37:45.427178    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:f6:71:6b:7f:e6:12 ID:1,f6:71:6b:7f:e6:12 Lease:0x657a1563}
	I1212 12:37:45.427186    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6:36:a9:90:e6:5a ID:1,6:36:a9:90:e6:5a Lease:0x657a1551}
	I1212 12:37:45.427195    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:6a:f9:11:23:5f:c0 ID:1,6a:f9:11:23:5f:c0 Lease:0x657a1523}
	I1212 12:37:45.427202    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:1e:b8:3f:a6:ec:2d ID:1,1e:b8:3f:a6:ec:2d Lease:0x657a1438}
	I1212 12:37:45.427210    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:21:4e:2d:af:b4 ID:1,9a:21:4e:2d:af:b4 Lease:0x6578c29e}
	I1212 12:37:45.427220    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:b2:ee:c6:ac:a8:d7 ID:1,b2:ee:c6:ac:a8:d7 Lease:0x657a13ef}
	I1212 12:37:45.427227    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:de:3:d1:7e:56:cf ID:1,de:3:d1:7e:56:cf Lease:0x657a13d4}
	I1212 12:37:45.427237    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:46:a9:2d:43:e4:ab ID:1,46:a9:2d:43:e4:ab Lease:0x6578c264}
	I1212 12:37:45.427245    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:aa:d4:5f:30:b6:d3 ID:1,aa:d4:5f:30:b6:d3 Lease:0x657a13a9}
	I1212 12:37:45.427254    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:4a:4c:49:6:3c:3b ID:1,4a:4c:49:6:3c:3b Lease:0x657a1392}
	I1212 12:37:45.427261    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:16:f1:67:71:c6:1b ID:1,16:f1:67:71:c6:1b Lease:0x657a1326}
	I1212 12:37:45.427270    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:ea:5b:5a:8c:df:db ID:1,ea:5b:5a:8c:df:db Lease:0x657a12b9}
	I1212 12:37:45.427288    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a:be:a7:4:7b:59 ID:1,a:be:a7:4:7b:59 Lease:0x657a1280}
	I1212 12:37:45.427301    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ae:69:0:1f:74:78 ID:1,ae:69:0:1f:74:78 Lease:0x657a11c1}
	I1212 12:37:45.427311    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:b0:b0:c6:4d:99 ID:1,9a:b0:b0:c6:4d:99 Lease:0x657a119b}
	I1212 12:37:45.427321    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:6:ed:17:4f:83:b2 ID:1,6:ed:17:4f:83:b2 Lease:0x657a1162}
	I1212 12:37:45.427330    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:61:fd:7b:ff:ad ID:1,d6:61:fd:7b:ff:ad Lease:0x6578bf04}
	I1212 12:37:45.427339    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:7e:4f:44:39:56:54 ID:1,7e:4f:44:39:56:54 Lease:0x6578bed7}
	I1212 12:37:45.427348    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:66:e9:56:a3:ac:b3 ID:1,66:e9:56:a3:ac:b3 Lease:0x6578beae}
	I1212 12:37:45.427357    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:44:78:68:b1:3a ID:1,3e:44:78:68:b1:3a Lease:0x657a0fe7}
	I1212 12:37:45.427366    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:f3:3:99:96:a ID:1,1e:f3:3:99:96:a Lease:0x657a0fab}
	I1212 12:37:45.427379    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:a6:c:94:a4:bb:23 ID:1,a6:c:94:a4:bb:23 Lease:0x657a0f0c}
	I1212 12:37:45.427388    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:52:61:6b:49:5:19 ID:1,52:61:6b:49:5:19 Lease:0x6578bd76}
	I1212 12:37:45.427399    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:4:40:f1:6c:89 ID:1,aa:4:40:f1:6c:89 Lease:0x657a0e0c}
	I1212 12:37:45.427407    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:32:8b:81:0:e0:a2 ID:1,32:8b:81:0:e0:a2 Lease:0x6578bc81}
	I1212 12:37:45.427416    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:e6:d:d2:81:4b ID:1,aa:e6:d:d2:81:4b Lease:0x657a0cd4}
	I1212 12:37:45.427427    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a0c44}
	I1212 12:37:47.428311    8788 main.go:141] libmachine: (kindnet-183000) DBG | Attempt 2
	I1212 12:37:47.428329    8788 main.go:141] libmachine: (kindnet-183000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:37:47.428423    8788 main.go:141] libmachine: (kindnet-183000) DBG | hyperkit pid from json: 8797
	I1212 12:37:47.429284    8788 main.go:141] libmachine: (kindnet-183000) DBG | Searching for 1a:7f:b8:14:2a:8f in /var/db/dhcpd_leases ...
	I1212 12:37:47.429346    8788 main.go:141] libmachine: (kindnet-183000) DBG | Found 31 entries in /var/db/dhcpd_leases!
	I1212 12:37:47.429360    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:da:e2:d8:da:bf:c3 ID:1,da:e2:d8:da:bf:c3 Lease:0x657a15eb}
	I1212 12:37:47.429373    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:7e:a:9:f8:41:d6 ID:1,7e:a:9:f8:41:d6 Lease:0x657a15cd}
	I1212 12:37:47.429382    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:9a:f9:c3:85:71:d1 ID:1,9a:f9:c3:85:71:d1 Lease:0x6578c460}
	I1212 12:37:47.429389    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:4a:9e:77:f4:f6:17 ID:1,4a:9e:77:f4:f6:17 Lease:0x6578c435}
	I1212 12:37:47.429397    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:f6:71:6b:7f:e6:12 ID:1,f6:71:6b:7f:e6:12 Lease:0x657a1563}
	I1212 12:37:47.429430    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6:36:a9:90:e6:5a ID:1,6:36:a9:90:e6:5a Lease:0x657a1551}
	I1212 12:37:47.429440    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:6a:f9:11:23:5f:c0 ID:1,6a:f9:11:23:5f:c0 Lease:0x657a1523}
	I1212 12:37:47.429451    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:1e:b8:3f:a6:ec:2d ID:1,1e:b8:3f:a6:ec:2d Lease:0x657a1438}
	I1212 12:37:47.429458    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:21:4e:2d:af:b4 ID:1,9a:21:4e:2d:af:b4 Lease:0x6578c29e}
	I1212 12:37:47.429484    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:b2:ee:c6:ac:a8:d7 ID:1,b2:ee:c6:ac:a8:d7 Lease:0x657a13ef}
	I1212 12:37:47.429498    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:de:3:d1:7e:56:cf ID:1,de:3:d1:7e:56:cf Lease:0x657a13d4}
	I1212 12:37:47.429506    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:46:a9:2d:43:e4:ab ID:1,46:a9:2d:43:e4:ab Lease:0x6578c264}
	I1212 12:37:47.429517    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:aa:d4:5f:30:b6:d3 ID:1,aa:d4:5f:30:b6:d3 Lease:0x657a13a9}
	I1212 12:37:47.429529    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:4a:4c:49:6:3c:3b ID:1,4a:4c:49:6:3c:3b Lease:0x657a1392}
	I1212 12:37:47.429539    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:16:f1:67:71:c6:1b ID:1,16:f1:67:71:c6:1b Lease:0x657a1326}
	I1212 12:37:47.429549    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:ea:5b:5a:8c:df:db ID:1,ea:5b:5a:8c:df:db Lease:0x657a12b9}
	I1212 12:37:47.429556    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a:be:a7:4:7b:59 ID:1,a:be:a7:4:7b:59 Lease:0x657a1280}
	I1212 12:37:47.429565    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ae:69:0:1f:74:78 ID:1,ae:69:0:1f:74:78 Lease:0x657a11c1}
	I1212 12:37:47.429572    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:b0:b0:c6:4d:99 ID:1,9a:b0:b0:c6:4d:99 Lease:0x657a119b}
	I1212 12:37:47.429580    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:6:ed:17:4f:83:b2 ID:1,6:ed:17:4f:83:b2 Lease:0x657a1162}
	I1212 12:37:47.429588    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:61:fd:7b:ff:ad ID:1,d6:61:fd:7b:ff:ad Lease:0x6578bf04}
	I1212 12:37:47.429598    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:7e:4f:44:39:56:54 ID:1,7e:4f:44:39:56:54 Lease:0x6578bed7}
	I1212 12:37:47.429606    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:66:e9:56:a3:ac:b3 ID:1,66:e9:56:a3:ac:b3 Lease:0x6578beae}
	I1212 12:37:47.429621    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:44:78:68:b1:3a ID:1,3e:44:78:68:b1:3a Lease:0x657a0fe7}
	I1212 12:37:47.429629    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:f3:3:99:96:a ID:1,1e:f3:3:99:96:a Lease:0x657a0fab}
	I1212 12:37:47.429637    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:a6:c:94:a4:bb:23 ID:1,a6:c:94:a4:bb:23 Lease:0x657a0f0c}
	I1212 12:37:47.429647    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:52:61:6b:49:5:19 ID:1,52:61:6b:49:5:19 Lease:0x6578bd76}
	I1212 12:37:47.429657    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:4:40:f1:6c:89 ID:1,aa:4:40:f1:6c:89 Lease:0x657a0e0c}
	I1212 12:37:47.429667    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:32:8b:81:0:e0:a2 ID:1,32:8b:81:0:e0:a2 Lease:0x6578bc81}
	I1212 12:37:47.429676    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:e6:d:d2:81:4b ID:1,aa:e6:d:d2:81:4b Lease:0x657a0cd4}
	I1212 12:37:47.429684    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a0c44}
	I1212 12:37:48.988605    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:48 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1212 12:37:48.988683    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:48 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1212 12:37:48.988692    8788 main.go:141] libmachine: (kindnet-183000) DBG | 2023/12/12 12:37:48 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1212 12:37:49.429559    8788 main.go:141] libmachine: (kindnet-183000) DBG | Attempt 3
	I1212 12:37:49.429576    8788 main.go:141] libmachine: (kindnet-183000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:37:49.429685    8788 main.go:141] libmachine: (kindnet-183000) DBG | hyperkit pid from json: 8797
	I1212 12:37:49.430587    8788 main.go:141] libmachine: (kindnet-183000) DBG | Searching for 1a:7f:b8:14:2a:8f in /var/db/dhcpd_leases ...
	I1212 12:37:49.430645    8788 main.go:141] libmachine: (kindnet-183000) DBG | Found 31 entries in /var/db/dhcpd_leases!
	I1212 12:37:49.430659    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:da:e2:d8:da:bf:c3 ID:1,da:e2:d8:da:bf:c3 Lease:0x657a15eb}
	I1212 12:37:49.430668    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:7e:a:9:f8:41:d6 ID:1,7e:a:9:f8:41:d6 Lease:0x657a15cd}
	I1212 12:37:49.430674    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:9a:f9:c3:85:71:d1 ID:1,9a:f9:c3:85:71:d1 Lease:0x6578c460}
	I1212 12:37:49.430710    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:4a:9e:77:f4:f6:17 ID:1,4a:9e:77:f4:f6:17 Lease:0x6578c435}
	I1212 12:37:49.430727    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:f6:71:6b:7f:e6:12 ID:1,f6:71:6b:7f:e6:12 Lease:0x657a1563}
	I1212 12:37:49.430741    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6:36:a9:90:e6:5a ID:1,6:36:a9:90:e6:5a Lease:0x657a1551}
	I1212 12:37:49.430757    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:6a:f9:11:23:5f:c0 ID:1,6a:f9:11:23:5f:c0 Lease:0x657a1523}
	I1212 12:37:49.430767    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:1e:b8:3f:a6:ec:2d ID:1,1e:b8:3f:a6:ec:2d Lease:0x657a1438}
	I1212 12:37:49.430779    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:21:4e:2d:af:b4 ID:1,9a:21:4e:2d:af:b4 Lease:0x6578c29e}
	I1212 12:37:49.430791    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:b2:ee:c6:ac:a8:d7 ID:1,b2:ee:c6:ac:a8:d7 Lease:0x657a13ef}
	I1212 12:37:49.430806    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:de:3:d1:7e:56:cf ID:1,de:3:d1:7e:56:cf Lease:0x657a13d4}
	I1212 12:37:49.430818    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:46:a9:2d:43:e4:ab ID:1,46:a9:2d:43:e4:ab Lease:0x6578c264}
	I1212 12:37:49.430836    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:aa:d4:5f:30:b6:d3 ID:1,aa:d4:5f:30:b6:d3 Lease:0x657a13a9}
	I1212 12:37:49.430845    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:4a:4c:49:6:3c:3b ID:1,4a:4c:49:6:3c:3b Lease:0x657a1392}
	I1212 12:37:49.430855    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:16:f1:67:71:c6:1b ID:1,16:f1:67:71:c6:1b Lease:0x657a1326}
	I1212 12:37:49.430867    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:ea:5b:5a:8c:df:db ID:1,ea:5b:5a:8c:df:db Lease:0x657a12b9}
	I1212 12:37:49.430877    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a:be:a7:4:7b:59 ID:1,a:be:a7:4:7b:59 Lease:0x657a1280}
	I1212 12:37:49.430885    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ae:69:0:1f:74:78 ID:1,ae:69:0:1f:74:78 Lease:0x657a11c1}
	I1212 12:37:49.430893    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:b0:b0:c6:4d:99 ID:1,9a:b0:b0:c6:4d:99 Lease:0x657a119b}
	I1212 12:37:49.430902    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:6:ed:17:4f:83:b2 ID:1,6:ed:17:4f:83:b2 Lease:0x657a1162}
	I1212 12:37:49.430917    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:61:fd:7b:ff:ad ID:1,d6:61:fd:7b:ff:ad Lease:0x6578bf04}
	I1212 12:37:49.430927    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:7e:4f:44:39:56:54 ID:1,7e:4f:44:39:56:54 Lease:0x6578bed7}
	I1212 12:37:49.430987    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:66:e9:56:a3:ac:b3 ID:1,66:e9:56:a3:ac:b3 Lease:0x6578beae}
	I1212 12:37:49.431007    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:44:78:68:b1:3a ID:1,3e:44:78:68:b1:3a Lease:0x657a0fe7}
	I1212 12:37:49.431016    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:f3:3:99:96:a ID:1,1e:f3:3:99:96:a Lease:0x657a0fab}
	I1212 12:37:49.431029    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:a6:c:94:a4:bb:23 ID:1,a6:c:94:a4:bb:23 Lease:0x657a0f0c}
	I1212 12:37:49.431037    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:52:61:6b:49:5:19 ID:1,52:61:6b:49:5:19 Lease:0x6578bd76}
	I1212 12:37:49.431046    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:4:40:f1:6c:89 ID:1,aa:4:40:f1:6c:89 Lease:0x657a0e0c}
	I1212 12:37:49.431053    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:32:8b:81:0:e0:a2 ID:1,32:8b:81:0:e0:a2 Lease:0x6578bc81}
	I1212 12:37:49.431066    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:e6:d:d2:81:4b ID:1,aa:e6:d:d2:81:4b Lease:0x657a0cd4}
	I1212 12:37:49.431082    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a0c44}
	I1212 12:37:51.432173    8788 main.go:141] libmachine: (kindnet-183000) DBG | Attempt 4
	I1212 12:37:51.432197    8788 main.go:141] libmachine: (kindnet-183000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:37:51.432269    8788 main.go:141] libmachine: (kindnet-183000) DBG | hyperkit pid from json: 8797
	I1212 12:37:51.433149    8788 main.go:141] libmachine: (kindnet-183000) DBG | Searching for 1a:7f:b8:14:2a:8f in /var/db/dhcpd_leases ...
	I1212 12:37:51.433203    8788 main.go:141] libmachine: (kindnet-183000) DBG | Found 31 entries in /var/db/dhcpd_leases!
	I1212 12:37:51.433219    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:da:e2:d8:da:bf:c3 ID:1,da:e2:d8:da:bf:c3 Lease:0x657a15eb}
	I1212 12:37:51.433230    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:7e:a:9:f8:41:d6 ID:1,7e:a:9:f8:41:d6 Lease:0x657a15cd}
	I1212 12:37:51.433248    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:9a:f9:c3:85:71:d1 ID:1,9a:f9:c3:85:71:d1 Lease:0x6578c460}
	I1212 12:37:51.433288    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:4a:9e:77:f4:f6:17 ID:1,4a:9e:77:f4:f6:17 Lease:0x6578c435}
	I1212 12:37:51.433304    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:f6:71:6b:7f:e6:12 ID:1,f6:71:6b:7f:e6:12 Lease:0x657a1563}
	I1212 12:37:51.433316    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6:36:a9:90:e6:5a ID:1,6:36:a9:90:e6:5a Lease:0x657a1551}
	I1212 12:37:51.433330    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:6a:f9:11:23:5f:c0 ID:1,6a:f9:11:23:5f:c0 Lease:0x657a1523}
	I1212 12:37:51.433342    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:1e:b8:3f:a6:ec:2d ID:1,1e:b8:3f:a6:ec:2d Lease:0x657a1438}
	I1212 12:37:51.433349    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:9a:21:4e:2d:af:b4 ID:1,9a:21:4e:2d:af:b4 Lease:0x6578c29e}
	I1212 12:37:51.433357    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:b2:ee:c6:ac:a8:d7 ID:1,b2:ee:c6:ac:a8:d7 Lease:0x657a13ef}
	I1212 12:37:51.433367    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:de:3:d1:7e:56:cf ID:1,de:3:d1:7e:56:cf Lease:0x657a13d4}
	I1212 12:37:51.433385    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:46:a9:2d:43:e4:ab ID:1,46:a9:2d:43:e4:ab Lease:0x6578c264}
	I1212 12:37:51.433401    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:aa:d4:5f:30:b6:d3 ID:1,aa:d4:5f:30:b6:d3 Lease:0x657a13a9}
	I1212 12:37:51.433414    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:4a:4c:49:6:3c:3b ID:1,4a:4c:49:6:3c:3b Lease:0x657a1392}
	I1212 12:37:51.433434    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:16:f1:67:71:c6:1b ID:1,16:f1:67:71:c6:1b Lease:0x657a1326}
	I1212 12:37:51.433444    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:ea:5b:5a:8c:df:db ID:1,ea:5b:5a:8c:df:db Lease:0x657a12b9}
	I1212 12:37:51.433458    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a:be:a7:4:7b:59 ID:1,a:be:a7:4:7b:59 Lease:0x657a1280}
	I1212 12:37:51.433473    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:ae:69:0:1f:74:78 ID:1,ae:69:0:1f:74:78 Lease:0x657a11c1}
	I1212 12:37:51.433483    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:9a:b0:b0:c6:4d:99 ID:1,9a:b0:b0:c6:4d:99 Lease:0x657a119b}
	I1212 12:37:51.433491    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:6:ed:17:4f:83:b2 ID:1,6:ed:17:4f:83:b2 Lease:0x657a1162}
	I1212 12:37:51.433498    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:61:fd:7b:ff:ad ID:1,d6:61:fd:7b:ff:ad Lease:0x6578bf04}
	I1212 12:37:51.433507    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:7e:4f:44:39:56:54 ID:1,7e:4f:44:39:56:54 Lease:0x6578bed7}
	I1212 12:37:51.433526    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:66:e9:56:a3:ac:b3 ID:1,66:e9:56:a3:ac:b3 Lease:0x6578beae}
	I1212 12:37:51.433544    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:3e:44:78:68:b1:3a ID:1,3e:44:78:68:b1:3a Lease:0x657a0fe7}
	I1212 12:37:51.433565    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:f3:3:99:96:a ID:1,1e:f3:3:99:96:a Lease:0x657a0fab}
	I1212 12:37:51.433578    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:a6:c:94:a4:bb:23 ID:1,a6:c:94:a4:bb:23 Lease:0x657a0f0c}
	I1212 12:37:51.433588    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:52:61:6b:49:5:19 ID:1,52:61:6b:49:5:19 Lease:0x6578bd76}
	I1212 12:37:51.433601    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:aa:4:40:f1:6c:89 ID:1,aa:4:40:f1:6c:89 Lease:0x657a0e0c}
	I1212 12:37:51.433610    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:32:8b:81:0:e0:a2 ID:1,32:8b:81:0:e0:a2 Lease:0x6578bc81}
	I1212 12:37:51.433622    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:e6:d:d2:81:4b ID:1,aa:e6:d:d2:81:4b Lease:0x657a0cd4}
	I1212 12:37:51.433638    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x657a0c44}
	I1212 12:37:53.433541    8788 main.go:141] libmachine: (kindnet-183000) DBG | Attempt 5
	I1212 12:37:53.433575    8788 main.go:141] libmachine: (kindnet-183000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:37:53.433699    8788 main.go:141] libmachine: (kindnet-183000) DBG | hyperkit pid from json: 8797
	I1212 12:37:53.434820    8788 main.go:141] libmachine: (kindnet-183000) DBG | Searching for 1a:7f:b8:14:2a:8f in /var/db/dhcpd_leases ...
	I1212 12:37:53.434896    8788 main.go:141] libmachine: (kindnet-183000) DBG | Found 32 entries in /var/db/dhcpd_leases!
	I1212 12:37:53.434914    8788 main.go:141] libmachine: (kindnet-183000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:1a:7f:b8:14:2a:8f ID:1,1a:7f:b8:14:2a:8f Lease:0x657a161f}
	I1212 12:37:53.434938    8788 main.go:141] libmachine: (kindnet-183000) DBG | Found match: 1a:7f:b8:14:2a:8f
	I1212 12:37:53.434953    8788 main.go:141] libmachine: (kindnet-183000) DBG | IP: 192.169.0.33
	I1212 12:37:53.435003    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetConfigRaw
	I1212 12:37:53.435772    8788 main.go:141] libmachine: (kindnet-183000) Calling .DriverName
	I1212 12:37:53.435934    8788 main.go:141] libmachine: (kindnet-183000) Calling .DriverName
	I1212 12:37:53.436084    8788 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 12:37:53.436098    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetState
	I1212 12:37:53.436181    8788 main.go:141] libmachine: (kindnet-183000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1212 12:37:53.436233    8788 main.go:141] libmachine: (kindnet-183000) DBG | hyperkit pid from json: 8797
	I1212 12:37:53.437058    8788 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 12:37:53.437069    8788 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 12:37:53.437081    8788 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 12:37:53.437091    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHHostname
	I1212 12:37:53.437184    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHPort
	I1212 12:37:53.437276    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:53.437364    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:53.437449    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHUsername
	I1212 12:37:53.437561    8788 main.go:141] libmachine: Using SSH client type: native
	I1212 12:37:53.437861    8788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 12:37:53.437869    8788 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 12:37:53.492053    8788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 12:37:53.492066    8788 main.go:141] libmachine: Detecting the provisioner...
	I1212 12:37:53.492073    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHHostname
	I1212 12:37:53.492204    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHPort
	I1212 12:37:53.492298    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:53.492383    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:53.492455    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHUsername
	I1212 12:37:53.492605    8788 main.go:141] libmachine: Using SSH client type: native
	I1212 12:37:53.492868    8788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 12:37:53.492877    8788 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 12:37:53.547863    8788 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g0ec83c8-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1212 12:37:53.547931    8788 main.go:141] libmachine: found compatible host: buildroot
	I1212 12:37:53.547938    8788 main.go:141] libmachine: Provisioning with buildroot...
	I1212 12:37:53.547944    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetMachineName
	I1212 12:37:53.548086    8788 buildroot.go:166] provisioning hostname "kindnet-183000"
	I1212 12:37:53.548099    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetMachineName
	I1212 12:37:53.548210    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHHostname
	I1212 12:37:53.548283    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHPort
	I1212 12:37:53.548370    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:53.548464    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:53.548550    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHUsername
	I1212 12:37:53.548674    8788 main.go:141] libmachine: Using SSH client type: native
	I1212 12:37:53.548917    8788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 12:37:53.548926    8788 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-183000 && echo "kindnet-183000" | sudo tee /etc/hostname
	I1212 12:37:53.611681    8788 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-183000
	
	I1212 12:37:53.611700    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHHostname
	I1212 12:37:53.611824    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHPort
	I1212 12:37:53.611921    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:53.611996    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:53.612082    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHUsername
	I1212 12:37:53.612198    8788 main.go:141] libmachine: Using SSH client type: native
	I1212 12:37:53.612451    8788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 12:37:53.612464    8788 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-183000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-183000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-183000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 12:37:53.672663    8788 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 12:37:53.672682    8788 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17734-1975/.minikube CaCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17734-1975/.minikube}
	I1212 12:37:53.672697    8788 buildroot.go:174] setting up certificates
	I1212 12:37:53.672709    8788 provision.go:83] configureAuth start
	I1212 12:37:53.672716    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetMachineName
	I1212 12:37:53.672853    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetIP
	I1212 12:37:53.672975    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHHostname
	I1212 12:37:53.673095    8788 provision.go:138] copyHostCerts
	I1212 12:37:53.673154    8788 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem, removing ...
	I1212 12:37:53.673164    8788 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem
	I1212 12:37:53.673317    8788 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.pem (1078 bytes)
	I1212 12:37:53.673669    8788 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem, removing ...
	I1212 12:37:53.673677    8788 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem
	I1212 12:37:53.673760    8788 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/cert.pem (1123 bytes)
	I1212 12:37:53.673940    8788 exec_runner.go:144] found /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem, removing ...
	I1212 12:37:53.673947    8788 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem
	I1212 12:37:53.674023    8788 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17734-1975/.minikube/key.pem (1675 bytes)
	I1212 12:37:53.674178    8788 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca-key.pem org=jenkins.kindnet-183000 san=[192.169.0.33 192.169.0.33 localhost 127.0.0.1 minikube kindnet-183000]
	I1212 12:37:53.814986    8788 provision.go:172] copyRemoteCerts
	I1212 12:37:53.815045    8788 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 12:37:53.815063    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHHostname
	I1212 12:37:53.815210    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHPort
	I1212 12:37:53.815299    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:53.815383    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHUsername
	I1212 12:37:53.815471    8788 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/id_rsa Username:docker}
	I1212 12:37:53.848849    8788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 12:37:53.864130    8788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 12:37:53.879758    8788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 12:37:53.895694    8788 provision.go:86] duration metric: configureAuth took 222.971691ms
	I1212 12:37:53.895708    8788 buildroot.go:189] setting minikube options for container-runtime
	I1212 12:37:53.895837    8788 config.go:182] Loaded profile config "kindnet-183000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:37:53.895852    8788 main.go:141] libmachine: (kindnet-183000) Calling .DriverName
	I1212 12:37:53.895991    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHHostname
	I1212 12:37:53.896076    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHPort
	I1212 12:37:53.896148    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:53.896238    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:53.896326    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHUsername
	I1212 12:37:53.896444    8788 main.go:141] libmachine: Using SSH client type: native
	I1212 12:37:53.896694    8788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 12:37:53.896703    8788 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1212 12:37:53.952844    8788 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1212 12:37:53.952856    8788 buildroot.go:70] root file system type: tmpfs
	I1212 12:37:53.952931    8788 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1212 12:37:53.952943    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHHostname
	I1212 12:37:53.953074    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHPort
	I1212 12:37:53.953170    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:53.953252    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:53.953341    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHUsername
	I1212 12:37:53.953450    8788 main.go:141] libmachine: Using SSH client type: native
	I1212 12:37:53.953698    8788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 12:37:53.953746    8788 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1212 12:37:54.016793    8788 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1212 12:37:54.016816    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHHostname
	I1212 12:37:54.016960    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHPort
	I1212 12:37:54.017063    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:54.017162    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:54.017250    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHUsername
	I1212 12:37:54.017391    8788 main.go:141] libmachine: Using SSH client type: native
	I1212 12:37:54.017648    8788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 12:37:54.017662    8788 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1212 12:37:54.529566    8788 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1212 12:37:54.529586    8788 main.go:141] libmachine: Checking connection to Docker...
	I1212 12:37:54.529593    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetURL
	I1212 12:37:54.529746    8788 main.go:141] libmachine: Docker is up and running!
	I1212 12:37:54.529755    8788 main.go:141] libmachine: Reticulating splines...
	I1212 12:37:54.529762    8788 client.go:171] LocalClient.Create took 11.949253972s
	I1212 12:37:54.529775    8788 start.go:167] duration metric: libmachine.API.Create for "kindnet-183000" took 11.949303548s
	I1212 12:37:54.529782    8788 start.go:300] post-start starting for "kindnet-183000" (driver="hyperkit")
	I1212 12:37:54.529790    8788 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 12:37:54.529803    8788 main.go:141] libmachine: (kindnet-183000) Calling .DriverName
	I1212 12:37:54.529956    8788 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 12:37:54.529979    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHHostname
	I1212 12:37:54.530068    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHPort
	I1212 12:37:54.530150    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:54.530254    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHUsername
	I1212 12:37:54.530345    8788 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/id_rsa Username:docker}
	I1212 12:37:54.562634    8788 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 12:37:54.565229    8788 info.go:137] Remote host: Buildroot 2021.02.12
	I1212 12:37:54.565244    8788 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17734-1975/.minikube/addons for local assets ...
	I1212 12:37:54.565345    8788 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17734-1975/.minikube/files for local assets ...
	I1212 12:37:54.565527    8788 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem -> 31982.pem in /etc/ssl/certs
	I1212 12:37:54.565739    8788 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 12:37:54.571443    8788 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/ssl/certs/31982.pem --> /etc/ssl/certs/31982.pem (1708 bytes)
	I1212 12:37:54.589565    8788 start.go:303] post-start completed in 59.77552ms
	I1212 12:37:54.589597    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetConfigRaw
	I1212 12:37:54.590163    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetIP
	I1212 12:37:54.590318    8788 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kindnet-183000/config.json ...
	I1212 12:37:54.590640    8788 start.go:128] duration metric: createHost completed in 12.042146694s
	I1212 12:37:54.590657    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHHostname
	I1212 12:37:54.590746    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHPort
	I1212 12:37:54.590829    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:54.590903    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:54.590977    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHUsername
	I1212 12:37:54.591065    8788 main.go:141] libmachine: Using SSH client type: native
	I1212 12:37:54.591301    8788 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406180] 0x1408e60 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I1212 12:37:54.591308    8788 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 12:37:54.646293    8788 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702413474.776849176
	
	I1212 12:37:54.646307    8788 fix.go:206] guest clock: 1702413474.776849176
	I1212 12:37:54.646314    8788 fix.go:219] Guest: 2023-12-12 12:37:54.776849176 -0800 PST Remote: 2023-12-12 12:37:54.590649 -0800 PST m=+12.788899744 (delta=186.200176ms)
	I1212 12:37:54.646341    8788 fix.go:190] guest clock delta is within tolerance: 186.200176ms
	I1212 12:37:54.646345    8788 start.go:83] releasing machines lock for "kindnet-183000", held for 12.097929672s
	I1212 12:37:54.646366    8788 main.go:141] libmachine: (kindnet-183000) Calling .DriverName
	I1212 12:37:54.646498    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetIP
	I1212 12:37:54.646589    8788 main.go:141] libmachine: (kindnet-183000) Calling .DriverName
	I1212 12:37:54.646873    8788 main.go:141] libmachine: (kindnet-183000) Calling .DriverName
	I1212 12:37:54.646971    8788 main.go:141] libmachine: (kindnet-183000) Calling .DriverName
	I1212 12:37:54.647051    8788 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 12:37:54.647076    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHHostname
	I1212 12:37:54.647093    8788 ssh_runner.go:195] Run: cat /version.json
	I1212 12:37:54.647106    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHHostname
	I1212 12:37:54.647157    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHPort
	I1212 12:37:54.647189    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHPort
	I1212 12:37:54.647245    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:54.647299    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHKeyPath
	I1212 12:37:54.647349    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHUsername
	I1212 12:37:54.647379    8788 main.go:141] libmachine: (kindnet-183000) Calling .GetSSHUsername
	I1212 12:37:54.647424    8788 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/id_rsa Username:docker}
	I1212 12:37:54.647449    8788 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/kindnet-183000/id_rsa Username:docker}
	I1212 12:37:54.726280    8788 ssh_runner.go:195] Run: systemctl --version
	I1212 12:37:54.730984    8788 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 12:37:54.735376    8788 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 12:37:54.735448    8788 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 12:37:54.747041    8788 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 12:37:54.747060    8788 start.go:475] detecting cgroup driver to use...
	I1212 12:37:54.747173    8788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 12:37:54.760704    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1212 12:37:54.768344    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1212 12:37:54.776660    8788 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1212 12:37:54.776734    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1212 12:37:54.785112    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 12:37:54.793010    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1212 12:37:54.800988    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1212 12:37:54.808748    8788 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 12:37:54.816496    8788 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1212 12:37:54.824832    8788 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 12:37:54.832548    8788 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 12:37:54.839970    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:37:54.927196    8788 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1212 12:37:54.941859    8788 start.go:475] detecting cgroup driver to use...
	I1212 12:37:54.941945    8788 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1212 12:37:54.951728    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 12:37:54.962365    8788 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 12:37:54.977635    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 12:37:54.987297    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 12:37:54.995594    8788 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1212 12:37:55.013616    8788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1212 12:37:55.023959    8788 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 12:37:55.038312    8788 ssh_runner.go:195] Run: which cri-dockerd
	I1212 12:37:55.041282    8788 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1212 12:37:55.048800    8788 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1212 12:37:55.061837    8788 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1212 12:37:55.155106    8788 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1212 12:37:55.241539    8788 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1212 12:37:55.241623    8788 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1212 12:37:55.253366    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:37:55.337684    8788 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1212 12:37:56.615895    8788 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.278207769s)
	I1212 12:37:56.615960    8788 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 12:37:56.711416    8788 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1212 12:37:56.803088    8788 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1212 12:37:56.909764    8788 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 12:37:56.998947    8788 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1212 12:37:57.010692    8788 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1212 12:37:57.040155    8788 out.go:177] 
	W1212 12:37:57.061028    8788 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-12 20:37:50 UTC, ends at Tue 2023-12-12 20:37:57 UTC. --
	Dec 12 20:37:51 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 20:37:51 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 20:37:54 kindnet-183000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 20:37:54 kindnet-183000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 20:37:54 kindnet-183000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 20:37:54 kindnet-183000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 12 20:37:54 kindnet-183000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 12 20:37:57 kindnet-183000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 12 20:37:57 kindnet-183000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 12 20:37:57 kindnet-183000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 12 20:37:57 kindnet-183000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 12 20:37:57 kindnet-183000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1212 12:37:57.061053    8788 out.go:239] * 
	W1212 12:37:57.061708    8788 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 12:37:57.124926    8788 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/kindnet/Start (15.39s)
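Note on this failure: the journal above shows why the restart was refused. systemd will not re-listen on cri-docker.socket while its triggered unit, cri-docker.service, is still active ("Socket service cri-docker.service already active, refusing"), so the socket never comes back up and minikube aborts with RUNTIME_ENABLE. A minimal sketch of commands for inspecting this by hand on the node; the unit names come from the log above, while the stop-then-restart ordering is an assumed workaround that this report does not verify:

	# inspect the state behind "Failed to listen on CRI Docker Socket for the API"
	sudo systemctl status cri-docker.service cri-docker.socket
	sudo journalctl --no-pager -u cri-docker.socket

	# assumed workaround: stop the already-active service before restarting the socket
	sudo systemctl stop cri-docker.service
	sudo systemctl restart cri-docker.socket
	sudo systemctl start cri-docker.service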

                                                
                                    

Test pass (279/323)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 52.91
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.35
10 TestDownloadOnly/v1.28.4/json-events 41.64
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.34
17 TestDownloadOnly/v1.29.0-rc.2/json-events 17.86
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.3
23 TestDownloadOnly/DeleteAll 0.43
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
26 TestBinaryMirror 1.18
27 TestOffline 54.83
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.17
32 TestAddons/Setup 135.49
34 TestAddons/parallel/Registry 14.14
35 TestAddons/parallel/Ingress 20.63
36 TestAddons/parallel/InspektorGadget 10.51
37 TestAddons/parallel/MetricsServer 5.5
38 TestAddons/parallel/HelmTiller 10.33
40 TestAddons/parallel/CSI 61.68
41 TestAddons/parallel/Headlamp 13.26
42 TestAddons/parallel/CloudSpanner 5.46
43 TestAddons/parallel/LocalPath 56.62
44 TestAddons/parallel/NvidiaDevicePlugin 5.4
47 TestAddons/serial/GCPAuth/Namespaces 0.1
48 TestAddons/StoppedEnableDisable 5.78
49 TestCertOptions 40.42
50 TestCertExpiration 247.29
51 TestDockerFlags 47.74
52 TestForceSystemdFlag 38.97
53 TestForceSystemdEnv 40.28
56 TestHyperKitDriverInstallOrUpdate 6.49
59 TestErrorSpam/setup 34.22
60 TestErrorSpam/start 1.64
61 TestErrorSpam/status 0.5
62 TestErrorSpam/pause 1.31
63 TestErrorSpam/unpause 1.28
64 TestErrorSpam/stop 3.68
67 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.38
76 TestFunctional/serial/CacheCmd/cache/add_local 1.58
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
78 TestFunctional/serial/CacheCmd/cache/list 0.08
81 TestFunctional/serial/CacheCmd/cache/delete 0.17
84 TestFunctional/serial/ExtraConfig 76.83
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 2.17
87 TestFunctional/serial/LogsFileCmd 2.4
88 TestFunctional/serial/InvalidService 4.89
90 TestFunctional/parallel/ConfigCmd 0.54
91 TestFunctional/parallel/DashboardCmd 13.72
92 TestFunctional/parallel/DryRun 1.48
93 TestFunctional/parallel/InternationalLanguage 0.59
94 TestFunctional/parallel/StatusCmd 0.51
98 TestFunctional/parallel/ServiceCmdConnect 8.61
99 TestFunctional/parallel/AddonsCmd 0.32
100 TestFunctional/parallel/PersistentVolumeClaim 26.42
102 TestFunctional/parallel/SSHCmd 0.3
103 TestFunctional/parallel/CpCmd 1.17
104 TestFunctional/parallel/MySQL 26.75
105 TestFunctional/parallel/FileSync 0.18
106 TestFunctional/parallel/CertSync 1.32
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.21
114 TestFunctional/parallel/License 0.52
115 TestFunctional/parallel/Version/short 0.13
116 TestFunctional/parallel/Version/components 0.46
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.17
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.18
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.18
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.17
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.22
122 TestFunctional/parallel/ImageCommands/Setup 2.48
123 TestFunctional/parallel/DockerEnv/bash 0.92
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.24
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.3
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.42
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.22
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.38
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.38
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.32
134 TestFunctional/parallel/ServiceCmd/DeployApp 12.13
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.39
137 TestFunctional/parallel/ServiceCmd/List 0.19
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.43
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.18
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
143 TestFunctional/parallel/ServiceCmd/Format 0.28
144 TestFunctional/parallel/ServiceCmd/URL 0.26
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
149 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
152 TestFunctional/parallel/ProfileCmd/profile_list 0.29
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
154 TestFunctional/parallel/MountCmd/any-port 6.04
155 TestFunctional/parallel/MountCmd/specific-port 2.66
156 TestFunctional/parallel/MountCmd/VerifyCleanup 2.15
157 TestFunctional/delete_addon-resizer_images 0.13
158 TestFunctional/delete_my-image_image 0.05
159 TestFunctional/delete_minikube_cached_images 0.05
163 TestImageBuild/serial/Setup 37.84
164 TestImageBuild/serial/NormalBuild 1.28
165 TestImageBuild/serial/BuildWithBuildArg 0.77
166 TestImageBuild/serial/BuildWithDockerIgnore 0.25
167 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.22
170 TestIngressAddonLegacy/StartLegacyK8sCluster 102.48
172 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.39
173 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.56
174 TestIngressAddonLegacy/serial/ValidateIngressAddons 47.17
177 TestJSONOutput/start/Command 48.98
178 TestJSONOutput/start/Audit 0
180 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/pause/Command 0.48
184 TestJSONOutput/pause/Audit 0
186 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/unpause/Command 0.43
190 TestJSONOutput/unpause/Audit 0
192 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/stop/Command 8.16
196 TestJSONOutput/stop/Audit 0
198 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
200 TestErrorJSONOutput 0.9
205 TestMainNoArgs 0.08
206 TestMinikubeProfile 87.91
209 TestMountStart/serial/StartWithMountFirst 16.75
210 TestMountStart/serial/VerifyMountFirst 0.32
211 TestMountStart/serial/StartWithMountSecond 15.89
212 TestMountStart/serial/VerifyMountSecond 0.31
213 TestMountStart/serial/DeleteFirst 2.38
214 TestMountStart/serial/VerifyMountPostDelete 0.31
215 TestMountStart/serial/Stop 2.23
216 TestMountStart/serial/RestartStopped 40.25
217 TestMountStart/serial/VerifyMountPostStop 0.3
229 TestMultiNode/serial/RestartKeepsNodes 61.71
237 TestPreload 193.96
239 TestScheduledStopUnix 105.56
240 TestSkaffold 110.85
243 TestRunningBinaryUpgrade 172.58
245 TestKubernetesUpgrade 142.09
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.55
259 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.75
260 TestStoppedBinaryUpgrade/Setup 1.55
261 TestStoppedBinaryUpgrade/Upgrade 156.93
263 TestPause/serial/Start 49.12
264 TestStoppedBinaryUpgrade/MinikubeLogs 3
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.4
274 TestNoKubernetes/serial/StartWithK8s 36.97
275 TestPause/serial/SecondStartNoReconfiguration 41.25
276 TestNoKubernetes/serial/StartWithStopK8s 16.61
277 TestPause/serial/Pause 0.56
278 TestPause/serial/VerifyStatus 0.17
279 TestPause/serial/Unpause 0.53
280 TestPause/serial/PauseAgain 0.64
281 TestPause/serial/DeletePaused 5.28
282 TestNoKubernetes/serial/Start 15.86
283 TestPause/serial/VerifyDeletedResources 0.23
284 TestNetworkPlugins/group/auto/Start 60.22
285 TestNoKubernetes/serial/VerifyK8sNotRunning 0.14
286 TestNoKubernetes/serial/ProfileList 0.55
287 TestNoKubernetes/serial/Stop 2.24
288 TestNoKubernetes/serial/StartNoArgs 21.19
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
290 TestNetworkPlugins/group/flannel/Start 59.56
291 TestNetworkPlugins/group/auto/KubeletFlags 0.19
292 TestNetworkPlugins/group/auto/NetCatPod 11.21
293 TestNetworkPlugins/group/auto/DNS 0.14
294 TestNetworkPlugins/group/auto/Localhost 0.11
295 TestNetworkPlugins/group/auto/HairPin 0.11
297 TestNetworkPlugins/group/flannel/ControllerPod 5.01
298 TestNetworkPlugins/group/flannel/KubeletFlags 0.16
299 TestNetworkPlugins/group/flannel/NetCatPod 12.2
300 TestNetworkPlugins/group/flannel/DNS 0.13
301 TestNetworkPlugins/group/flannel/Localhost 0.11
302 TestNetworkPlugins/group/flannel/HairPin 0.11
303 TestNetworkPlugins/group/enable-default-cni/Start 90.09
304 TestNetworkPlugins/group/bridge/Start 49.02
305 TestNetworkPlugins/group/bridge/KubeletFlags 0.16
306 TestNetworkPlugins/group/bridge/NetCatPod 13.2
307 TestNetworkPlugins/group/bridge/DNS 0.12
308 TestNetworkPlugins/group/bridge/Localhost 0.12
309 TestNetworkPlugins/group/bridge/HairPin 0.1
310 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.15
311 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.19
312 TestNetworkPlugins/group/kubenet/Start 48.34
313 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
314 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
315 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
316 TestNetworkPlugins/group/custom-flannel/Start 59.08
317 TestNetworkPlugins/group/kubenet/KubeletFlags 0.2
318 TestNetworkPlugins/group/kubenet/NetCatPod 12.2
319 TestNetworkPlugins/group/kubenet/DNS 32.09
320 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.16
321 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.19
322 TestNetworkPlugins/group/kubenet/Localhost 0.11
323 TestNetworkPlugins/group/kubenet/HairPin 0.1
324 TestNetworkPlugins/group/custom-flannel/DNS 0.12
325 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
326 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
327 TestNetworkPlugins/group/calico/Start 71.98
328 TestNetworkPlugins/group/false/Start 59.82
329 TestNetworkPlugins/group/false/KubeletFlags 0.18
330 TestNetworkPlugins/group/false/NetCatPod 12.2
331 TestNetworkPlugins/group/calico/ControllerPod 5.02
332 TestNetworkPlugins/group/calico/KubeletFlags 0.17
333 TestNetworkPlugins/group/false/DNS 0.13
334 TestNetworkPlugins/group/false/Localhost 0.13
335 TestNetworkPlugins/group/calico/NetCatPod 10.25
336 TestNetworkPlugins/group/false/HairPin 0.12
337 TestNetworkPlugins/group/calico/DNS 0.12
338 TestNetworkPlugins/group/calico/Localhost 0.11
339 TestNetworkPlugins/group/calico/HairPin 0.17
341 TestStartStop/group/old-k8s-version/serial/FirstStart 152.93
343 TestStartStop/group/no-preload/serial/FirstStart 84.99
344 TestStartStop/group/no-preload/serial/DeployApp 8.57
345 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.75
346 TestStartStop/group/no-preload/serial/Stop 8.26
347 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.31
348 TestStartStop/group/no-preload/serial/SecondStart 302.52
349 TestStartStop/group/old-k8s-version/serial/DeployApp 9.43
350 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.68
351 TestStartStop/group/old-k8s-version/serial/Stop 8.3
352 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.33
353 TestStartStop/group/old-k8s-version/serial/SecondStart 487.29
354 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
355 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
356 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.16
357 TestStartStop/group/no-preload/serial/Pause 1.91
359 TestStartStop/group/embed-certs/serial/FirstStart 50.2
360 TestStartStop/group/embed-certs/serial/DeployApp 8.29
361 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.82
362 TestStartStop/group/embed-certs/serial/Stop 8.29
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.31
364 TestStartStop/group/embed-certs/serial/SecondStart 298.52
365 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.16
368 TestStartStop/group/old-k8s-version/serial/Pause 1.74
370 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.21
371 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.86
373 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.27
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.33
375 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 298.9
376 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
377 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
378 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.16
379 TestStartStop/group/embed-certs/serial/Pause 1.95
381 TestStartStop/group/newest-cni/serial/FirstStart 46.23
382 TestStartStop/group/newest-cni/serial/DeployApp 0
383 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.22
384 TestStartStop/group/newest-cni/serial/Stop 8.26
385 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.31
386 TestStartStop/group/newest-cni/serial/SecondStart 37.14
387 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.16
390 TestStartStop/group/newest-cni/serial/Pause 1.75
391 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
392 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
393 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.16
394 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.88
TestDownloadOnly/v1.16.0/json-events (52.91s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-089000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-089000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit : (52.912250326s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (52.91s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-089000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-089000: exit status 85 (352.670638ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-089000 | jenkins | v1.32.0 | 12 Dec 23 11:56 PST |          |
	|         | -p download-only-089000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 11:56:05
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 11:56:05.749340    3200 out.go:296] Setting OutFile to fd 1 ...
	I1212 11:56:05.749631    3200 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 11:56:05.749636    3200 out.go:309] Setting ErrFile to fd 2...
	I1212 11:56:05.749640    3200 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 11:56:05.749817    3200 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	W1212 11:56:05.749916    3200 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17734-1975/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17734-1975/.minikube/config/config.json: no such file or directory
	I1212 11:56:05.751689    3200 out.go:303] Setting JSON to true
	I1212 11:56:05.776901    3200 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1536,"bootTime":1702409429,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 11:56:05.777001    3200 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 11:56:05.801394    3200 out.go:97] [download-only-089000] minikube v1.32.0 on Darwin 14.2
	I1212 11:56:05.822610    3200 out.go:169] MINIKUBE_LOCATION=17734
	I1212 11:56:05.801572    3200 notify.go:220] Checking for updates...
	W1212 11:56:05.801576    3200 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball: no such file or directory
	I1212 11:56:05.872548    3200 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 11:56:05.893839    3200 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 11:56:05.914634    3200 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 11:56:05.936616    3200 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	W1212 11:56:05.978541    3200 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 11:56:05.979004    3200 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 11:56:06.057518    3200 out.go:97] Using the hyperkit driver based on user configuration
	I1212 11:56:06.057628    3200 start.go:298] selected driver: hyperkit
	I1212 11:56:06.057641    3200 start.go:902] validating driver "hyperkit" against <nil>
	I1212 11:56:06.058034    3200 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 11:56:06.058359    3200 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17734-1975/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 11:56:06.170004    3200 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 11:56:06.175307    3200 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 11:56:06.175328    3200 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 11:56:06.175367    3200 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1212 11:56:06.180063    3200 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1212 11:56:06.180224    3200 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 11:56:06.180285    3200 cni.go:84] Creating CNI manager for ""
	I1212 11:56:06.180299    3200 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1212 11:56:06.180309    3200 start_flags.go:323] config:
	{Name:download-only-089000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 11:56:06.180579    3200 iso.go:125] acquiring lock: {Name:mkd640d41cda61c79a7d2c2e38355d745b556a2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 11:56:06.202221    3200 out.go:97] Downloading VM boot image ...
	I1212 11:56:06.202323    3200 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/iso/amd64/minikube-v1.32.1-1701996673-17738-amd64.iso
	I1212 11:56:14.058278    3200 out.go:97] Starting control plane node download-only-089000 in cluster download-only-089000
	I1212 11:56:14.058322    3200 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1212 11:56:14.112686    3200 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1212 11:56:14.112718    3200 cache.go:56] Caching tarball of preloaded images
	I1212 11:56:14.113017    3200 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1212 11:56:14.133540    3200 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1212 11:56:14.133569    3200 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1212 11:56:14.213047    3200 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1212 11:56:23.918924    3200 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1212 11:56:23.919112    3200 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1212 11:56:24.526218    3200 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1212 11:56:24.526455    3200 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/download-only-089000/config.json ...
	I1212 11:56:24.526478    3200 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/download-only-089000/config.json: {Name:mkb01ea7baa241fa38bbdde703824b6acf865f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 11:56:24.526776    3200 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1212 11:56:24.527068    3200 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-089000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.35s)
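Note: the non-zero exit from "minikube logs" is expected here. The download-only start never creates a VM, so there is no control plane node to collect logs from; the command exits 85 with the hint shown above that the control plane node does not exist, and the test only asserts how long the logs command takes. A minimal sketch that replays the two commands from this block to observe the same exit code by hand (profile name, Kubernetes version and driver are copied from the log; using a locally installed minikube instead of the CI-built out/minikube-darwin-amd64 binary, and running this outside the suite, are assumptions):

	# download-only: caches the ISO, preload and kubectl, but creates no node
	minikube start -o=json --download-only -p download-only-089000 --force --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit
	# with no node, "logs" has nothing to collect and exits non-zero (85 in this run)
	minikube logs -p download-only-089000; echo "exit status: $?"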

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (41.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-089000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-089000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperkit : (41.643095116s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (41.64s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-089000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-089000: exit status 85 (335.120115ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-089000 | jenkins | v1.32.0 | 12 Dec 23 11:56 PST |          |
	|         | -p download-only-089000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-089000 | jenkins | v1.32.0 | 12 Dec 23 11:56 PST |          |
	|         | -p download-only-089000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 11:56:59
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 11:56:59.016896    3287 out.go:296] Setting OutFile to fd 1 ...
	I1212 11:56:59.017197    3287 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 11:56:59.017202    3287 out.go:309] Setting ErrFile to fd 2...
	I1212 11:56:59.017207    3287 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 11:56:59.017390    3287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	W1212 11:56:59.017483    3287 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17734-1975/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17734-1975/.minikube/config/config.json: no such file or directory
	I1212 11:56:59.018769    3287 out.go:303] Setting JSON to true
	I1212 11:56:59.041155    3287 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1590,"bootTime":1702409429,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 11:56:59.041257    3287 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 11:56:59.063218    3287 out.go:97] [download-only-089000] minikube v1.32.0 on Darwin 14.2
	I1212 11:56:59.084843    3287 out.go:169] MINIKUBE_LOCATION=17734
	I1212 11:56:59.063342    3287 notify.go:220] Checking for updates...
	I1212 11:56:59.126803    3287 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 11:56:59.168831    3287 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 11:56:59.210840    3287 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 11:56:59.231614    3287 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	W1212 11:56:59.273810    3287 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 11:56:59.274509    3287 config.go:182] Loaded profile config "download-only-089000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1212 11:56:59.274589    3287 start.go:810] api.Load failed for download-only-089000: filestore "download-only-089000": Docker machine "download-only-089000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 11:56:59.274748    3287 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 11:56:59.274792    3287 start.go:810] api.Load failed for download-only-089000: filestore "download-only-089000": Docker machine "download-only-089000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 11:56:59.303590    3287 out.go:97] Using the hyperkit driver based on existing profile
	I1212 11:56:59.303644    3287 start.go:298] selected driver: hyperkit
	I1212 11:56:59.303656    3287 start.go:902] validating driver "hyperkit" against &{Name:download-only-089000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 11:56:59.303971    3287 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 11:56:59.304114    3287 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17734-1975/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 11:56:59.313429    3287 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 11:56:59.317199    3287 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 11:56:59.317229    3287 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 11:56:59.320040    3287 cni.go:84] Creating CNI manager for ""
	I1212 11:56:59.320063    3287 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 11:56:59.320078    3287 start_flags.go:323] config:
	{Name:download-only-089000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-089000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 11:56:59.320234    3287 iso.go:125] acquiring lock: {Name:mkd640d41cda61c79a7d2c2e38355d745b556a2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 11:56:59.340746    3287 out.go:97] Starting control plane node download-only-089000 in cluster download-only-089000
	I1212 11:56:59.340762    3287 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 11:56:59.397860    3287 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 11:56:59.397922    3287 cache.go:56] Caching tarball of preloaded images
	I1212 11:56:59.398177    3287 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 11:56:59.419520    3287 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1212 11:56:59.419534    3287 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1212 11:56:59.501301    3287 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1212 11:57:06.582097    3287 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1212 11:57:06.582259    3287 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1212 11:57:07.214685    3287 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1212 11:57:07.214770    3287 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/download-only-089000/config.json ...
	I1212 11:57:07.215128    3287 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1212 11:57:07.215343    3287 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/darwin/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-089000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.34s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (17.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-089000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-089000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperkit : (17.855936621s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (17.86s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-089000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-089000: exit status 85 (295.46491ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-089000 | jenkins | v1.32.0 | 12 Dec 23 11:56 PST |          |
	|         | -p download-only-089000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=hyperkit                 |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-089000 | jenkins | v1.32.0 | 12 Dec 23 11:56 PST |          |
	|         | -p download-only-089000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=hyperkit                 |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-089000 | jenkins | v1.32.0 | 12 Dec 23 11:57 PST |          |
	|         | -p download-only-089000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=hyperkit                 |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/12 11:57:40
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 11:57:40.995589    3365 out.go:296] Setting OutFile to fd 1 ...
	I1212 11:57:40.995831    3365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 11:57:40.995837    3365 out.go:309] Setting ErrFile to fd 2...
	I1212 11:57:40.995841    3365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 11:57:40.996023    3365 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	W1212 11:57:40.996154    3365 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17734-1975/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17734-1975/.minikube/config/config.json: no such file or directory
	I1212 11:57:40.997799    3365 out.go:303] Setting JSON to true
	I1212 11:57:41.020458    3365 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1632,"bootTime":1702409429,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 11:57:41.020564    3365 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 11:57:41.041836    3365 out.go:97] [download-only-089000] minikube v1.32.0 on Darwin 14.2
	I1212 11:57:41.062657    3365 out.go:169] MINIKUBE_LOCATION=17734
	I1212 11:57:41.041994    3365 notify.go:220] Checking for updates...
	I1212 11:57:41.104557    3365 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 11:57:41.125782    3365 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 11:57:41.146529    3365 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 11:57:41.167717    3365 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	W1212 11:57:41.209613    3365 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 11:57:41.210024    3365 config.go:182] Loaded profile config "download-only-089000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1212 11:57:41.210072    3365 start.go:810] api.Load failed for download-only-089000: filestore "download-only-089000": Docker machine "download-only-089000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 11:57:41.210158    3365 driver.go:392] Setting default libvirt URI to qemu:///system
	W1212 11:57:41.210179    3365 start.go:810] api.Load failed for download-only-089000: filestore "download-only-089000": Docker machine "download-only-089000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1212 11:57:41.238595    3365 out.go:97] Using the hyperkit driver based on existing profile
	I1212 11:57:41.238629    3365 start.go:298] selected driver: hyperkit
	I1212 11:57:41.238636    3365 start.go:902] validating driver "hyperkit" against &{Name:download-only-089000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 11:57:41.238830    3365 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 11:57:41.238941    3365 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17734-1975/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1212 11:57:41.247452    3365 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1212 11:57:41.251525    3365 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 11:57:41.251547    3365 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1212 11:57:41.254485    3365 cni.go:84] Creating CNI manager for ""
	I1212 11:57:41.254509    3365 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1212 11:57:41.254524    3365 start_flags.go:323] config:
	{Name:download-only-089000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 11:57:41.254709    3365 iso.go:125] acquiring lock: {Name:mkd640d41cda61c79a7d2c2e38355d745b556a2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 11:57:41.275714    3365 out.go:97] Starting control plane node download-only-089000 in cluster download-only-089000
	I1212 11:57:41.275735    3365 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1212 11:57:41.325609    3365 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1212 11:57:41.325640    3365 cache.go:56] Caching tarball of preloaded images
	I1212 11:57:41.325954    3365 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1212 11:57:41.347493    3365 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1212 11:57:41.347509    3365 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1212 11:57:41.430138    3365 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:d472e9d5f1548dd0d68eb75b714c5436 -> /Users/jenkins/minikube-integration/17734-1975/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-089000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.30s)
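Editor's note: the download line above fetches the preload tarball with a "?checksum=md5:..." query parameter. A minimal Go sketch (not minikube's actual download code) of fetching a file and verifying it against that md5 digest; the output file name is illustrative.

// preload_download_sketch.go: hash the body while writing it to disk, then compare.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Stream the response to disk and into the hash in a single pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and digest copied from the download log line above.
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4",
		"preloaded-images.tar.lz4",
		"d472e9d5f1548dd0d68eb75b714c5436",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}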

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.43s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.43s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-089000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

                                                
                                    
x
+
TestBinaryMirror (1.18s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-587000 --alsologtostderr --binary-mirror http://127.0.0.1:49365 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-587000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-587000
--- PASS: TestBinaryMirror (1.18s)

                                                
                                    
x
+
TestOffline (54.83s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-569000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-569000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (49.544834407s)
helpers_test.go:175: Cleaning up "offline-docker-569000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-569000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-569000: (5.288059682s)
--- PASS: TestOffline (54.83s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-572000
addons_test.go:927: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-572000: exit status 85 (194.498169ms)

                                                
                                                
-- stdout --
	* Profile "addons-572000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-572000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-572000
addons_test.go:938: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-572000: exit status 85 (174.628611ms)

                                                
                                                
-- stdout --
	* Profile "addons-572000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-572000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

                                                
                                    
x
+
TestAddons/Setup (135.49s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-572000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-572000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m15.49042892s)
--- PASS: TestAddons/Setup (135.49s)

                                                
                                    
x
+
TestAddons/parallel/Registry (14.14s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 9.71504ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-n6pdf" [11127c33-f559-4d84-b5be-b91cec7d97ed] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010259995s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kqpgx" [1fab937b-4447-4fe3-839f-9ac4b34cbf9c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008245787s
addons_test.go:339: (dbg) Run:  kubectl --context addons-572000 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-572000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-572000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.420604411s)
addons_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p addons-572000 ip
addons_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p addons-572000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.14s)
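Editor's note: the registry addon is verified above by probing the registry Service DNS name from a throwaway busybox pod. A minimal Go sketch of that same probe, assuming kubectl is on PATH and the context name matches the profile used in this run.

// registry_probe_sketch.go: run the same kubectl command the test uses.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Throwaway pod that checks the registry Service from inside the cluster.
	cmd := exec.Command("kubectl", "--context", "addons-572000",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-it", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "registry not reachable from inside the cluster:", err)
		os.Exit(1)
	}
}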

                                                
                                    
x
+
TestAddons/parallel/Ingress (20.63s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-572000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-572000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-572000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1e87b363-3693-4156-97f6-ed6d8553d35b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1e87b363-3693-4156-97f6-ed6d8553d35b] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.009683625s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p addons-572000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-572000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p addons-572000 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.169.0.3
addons_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 -p addons-572000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-darwin-amd64 -p addons-572000 addons disable ingress-dns --alsologtostderr -v=1: (1.653622635s)
addons_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p addons-572000 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p addons-572000 addons disable ingress --alsologtostderr -v=1: (7.633610026s)
--- PASS: TestAddons/parallel/Ingress (20.63s)
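Editor's note: the ingress checks above curl the controller from inside the VM with a Host header and resolve a name through ingress-dns against the VM IP. A minimal Go sketch of the same two checks, assuming the minikube binary path used in this report and nslookup are available; profile name is copied from the log.

// ingress_check_sketch.go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the VM IP, as the test does with "minikube ip".
	ipOut, err := exec.Command("out/minikube-darwin-amd64", "-p", "addons-572000", "ip").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ip := strings.TrimSpace(string(ipOut))

	// Hit the ingress from inside the VM with the Host header it routes on.
	curl := exec.Command("out/minikube-darwin-amd64", "-p", "addons-572000",
		"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	curl.Stdout = os.Stdout
	curl.Stderr = os.Stderr
	if err := curl.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "ingress curl failed:", err)
		os.Exit(1)
	}

	// Resolve the ingress-dns test name against the VM IP.
	lookup := exec.Command("nslookup", "hello-john.test", ip)
	lookup.Stdout = os.Stdout
	lookup.Stderr = os.Stderr
	if err := lookup.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "ingress-dns lookup failed:", err)
	}
}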

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.51s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2b5xn" [542b70f4-18e0-43ed-9194-e32ed1344778] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00948152s
addons_test.go:840: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-572000
addons_test.go:840: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-572000: (5.498711565s)
--- PASS: TestAddons/parallel/InspektorGadget (10.51s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 3.403042ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-qcdkb" [dce0da88-15d6-4d09-b50f-71e0a1ddcfc5] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009639946s
addons_test.go:414: (dbg) Run:  kubectl --context addons-572000 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-darwin-amd64 -p addons-572000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.50s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (10.33s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 2.024689ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-dpbhp" [2c57e347-271a-44b1-82f7-f366877cf3f8] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009104954s
addons_test.go:472: (dbg) Run:  kubectl --context addons-572000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-572000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.878738314s)
addons_test.go:489: (dbg) Run:  out/minikube-darwin-amd64 -p addons-572000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.33s)

                                                
                                    
x
+
TestAddons/parallel/CSI (61.68s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 11.986952ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-572000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/12/12 12:00:30 [DEBUG] GET http://192.169.0.3:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-572000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ea24a3a6-b4cd-40d5-80ab-c32546604b91] Pending
helpers_test.go:344: "task-pv-pod" [ea24a3a6-b4cd-40d5-80ab-c32546604b91] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ea24a3a6-b4cd-40d5-80ab-c32546604b91] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.012968532s
addons_test.go:583: (dbg) Run:  kubectl --context addons-572000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-572000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-572000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-572000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-572000 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-572000 delete pod task-pv-pod: (1.039801262s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-572000 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-572000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-572000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0e71b5ab-52db-4742-9a32-a44efee56117] Pending
helpers_test.go:344: "task-pv-pod-restore" [0e71b5ab-52db-4742-9a32-a44efee56117] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0e71b5ab-52db-4742-9a32-a44efee56117] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.019060163s
addons_test.go:625: (dbg) Run:  kubectl --context addons-572000 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-572000 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-572000 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-darwin-amd64 -p addons-572000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-darwin-amd64 -p addons-572000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.423930883s)
addons_test.go:641: (dbg) Run:  out/minikube-darwin-amd64 -p addons-572000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (61.68s)
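Editor's note: the long run of "get pvc ... -o jsonpath={.status.phase}" lines above is a polling loop waiting for the claim to bind. A minimal Go sketch of that loop, assuming kubectl is on PATH; the interval and timeout values are illustrative, not the test's exact ones.

// pvc_wait_sketch.go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls the PVC's .status.phase until it reaches want or the deadline passes.
func waitForPVCPhase(ctx, ns, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", ns, name, want, timeout)
}

func main() {
	if err := waitForPVCPhase("addons-572000", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}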

                                                
                                    
x
+
TestAddons/parallel/Headlamp (13.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-572000 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-572000 --alsologtostderr -v=1: (1.240484944s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-8mnd8" [62b896c9-6a04-4fe3-84e5-3f3e530233a5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-8mnd8" [62b896c9-6a04-4fe3-84e5-3f3e530233a5] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.017045644s
--- PASS: TestAddons/parallel/Headlamp (13.26s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.46s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-997w4" [cbe24185-ff7a-4d03-a00a-983f4d2cb809] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008175276s
addons_test.go:859: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-572000
--- PASS: TestAddons/parallel/CloudSpanner (5.46s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (56.62s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-572000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-572000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-572000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e467cf3c-fae4-4447-8926-69a094b1b669] Pending
helpers_test.go:344: "test-local-path" [e467cf3c-fae4-4447-8926-69a094b1b669] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e467cf3c-fae4-4447-8926-69a094b1b669] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e467cf3c-fae4-4447-8926-69a094b1b669] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.008161818s
addons_test.go:890: (dbg) Run:  kubectl --context addons-572000 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-darwin-amd64 -p addons-572000 ssh "cat /opt/local-path-provisioner/pvc-2ce3bae1-d712-4373-b21f-d7d2c12b7391_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-572000 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-572000 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-darwin-amd64 -p addons-572000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-darwin-amd64 -p addons-572000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.770390872s)
--- PASS: TestAddons/parallel/LocalPath (56.62s)
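Editor's note: after the test pod completes, the log above reads the written file back from the node at /opt/local-path-provisioner/<pv>_<namespace>_<pvc>/file1. A minimal Go sketch of that read-back, assuming kubectl and the minikube binary used in this report are available; the directory layout is taken from the "ssh cat" line above.

// local_path_readback_sketch.go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Look up the PV name bound to the test PVC.
	out, err := exec.Command("kubectl", "--context", "addons-572000",
		"get", "pvc", "test-pvc", "-n", "default",
		"-o", "jsonpath={.spec.volumeName}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pv := strings.TrimSpace(string(out))

	// Read the file the completed pod wrote, via ssh into the VM.
	path := fmt.Sprintf("/opt/local-path-provisioner/%s_default_test-pvc/file1", pv)
	cat := exec.Command("out/minikube-darwin-amd64", "-p", "addons-572000", "ssh", "cat "+path)
	cat.Stdout = os.Stdout
	cat.Stderr = os.Stderr
	if err := cat.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "could not read back test file:", err)
	}
}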

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.4s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-szj6m" [cf235445-0701-474d-9f15-974af9601b82] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.011880145s
addons_test.go:954: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-572000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.40s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.1s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-572000 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-572000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (5.78s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-572000
addons_test.go:171: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-572000: (5.238416956s)
addons_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-572000
addons_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-572000
addons_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-572000
--- PASS: TestAddons/StoppedEnableDisable (5.78s)

                                                
                                    
x
+
TestCertOptions (40.42s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-326000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-326000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (36.629256675s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-326000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-326000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-326000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-326000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-326000
E1212 12:29:17.094055    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-326000: (3.422275507s)
--- PASS: TestCertOptions (40.42s)
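Editor's note: the cert-options run above dumps the apiserver certificate from inside the VM with openssl and then inspects the kubeconfig. A minimal Go sketch of checking that the extra --apiserver-ips/--apiserver-names values appear as SANs, assuming the minikube binary path and profile name from the log; the expected strings are copied from the start flags above.

// cert_options_check_sketch.go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Dump the apiserver certificate from inside the VM, as the test does.
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "cert-options-326000",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	text := string(out)

	// The extra IP and name passed on the start line should both appear as SANs.
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		if !strings.Contains(text, want) {
			fmt.Printf("missing expected SAN %q in apiserver.crt\n", want)
		}
	}
}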

                                                
                                    
x
+
TestCertExpiration (247.29s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-922000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-922000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (35.147530923s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-922000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-922000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (26.866840593s)
helpers_test.go:175: Cleaning up "cert-expiration-922000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-922000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-922000: (5.273696914s)
--- PASS: TestCertExpiration (247.29s)

                                                
                                    
x
+
TestDockerFlags (47.74s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-212000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-212000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (41.979402034s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-212000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-212000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-212000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-212000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-212000: (5.417763229s)
--- PASS: TestDockerFlags (47.74s)
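Editor's note: the docker-flags run above asks systemd for Docker's Environment inside the VM to confirm the --docker-env values were applied. A minimal Go sketch of that assertion, assuming the minikube binary path and profile name from the log; the expected key=value pairs are copied from the start flags above.

// docker_flags_check_sketch.go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Ask systemd for Docker's environment inside the VM, as the test does.
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "docker-flags-212000",
		"ssh", "sudo systemctl show docker --property=Environment --no-pager").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	env := string(out)

	// The --docker-env values passed at start time should be present verbatim.
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(env, want) {
			fmt.Printf("expected %q in docker Environment, got:\n%s\n", want, env)
		}
	}
}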

                                                
                                    
x
+
TestForceSystemdFlag (38.97s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-966000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-966000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (35.347971142s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-966000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-966000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-966000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-966000: (3.420389351s)
--- PASS: TestForceSystemdFlag (38.97s)

                                                
                                    
x
+
TestForceSystemdEnv (40.28s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-766000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-766000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (34.815336556s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-766000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-766000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-766000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-766000: (5.29244608s)
--- PASS: TestForceSystemdEnv (40.28s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (6.49s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.49s)

                                                
                                    
x
+
TestErrorSpam/setup (34.22s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-536000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-536000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 --driver=hyperkit : (34.223053957s)
--- PASS: TestErrorSpam/setup (34.22s)

                                                
                                    
x
+
TestErrorSpam/start (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 start --dry-run
--- PASS: TestErrorSpam/start (1.64s)

                                                
                                    
x
+
TestErrorSpam/status (0.5s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 status
--- PASS: TestErrorSpam/status (0.50s)

                                                
                                    
x
+
TestErrorSpam/pause (1.31s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 pause
--- PASS: TestErrorSpam/pause (1.31s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.28s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 unpause
--- PASS: TestErrorSpam/unpause (1.28s)

                                                
                                    
x
+
TestErrorSpam/stop (3.68s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 stop: (3.234752007s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-536000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-536000 stop
--- PASS: TestErrorSpam/stop (3.68s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /Users/jenkins/minikube-integration/17734-1975/.minikube/files/etc/test/nested/copy/3198/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-303000 cache add registry.k8s.io/pause:3.1: (1.353189241s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-303000 cache add registry.k8s.io/pause:3.3: (1.08702627s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)
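Editor's note: the cache subtests above and below add remote pause images to the minikube cache, list them, and delete them again. A minimal Go sketch of that add/list/delete cycle, assuming the minikube binary path from this report; the run helper is hypothetical.

// cache_cycle_sketch.go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run shells out to the minikube binary used in this report.
func run(args ...string) error {
	cmd := exec.Command("out/minikube-darwin-amd64", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	profile := "functional-303000"
	images := []string{"registry.k8s.io/pause:3.1", "registry.k8s.io/pause:3.3", "registry.k8s.io/pause:latest"}

	// Cache each image, list the cache, then delete the entries again.
	for _, img := range images {
		if err := run("-p", profile, "cache", "add", img); err != nil {
			fmt.Fprintln(os.Stderr, "cache add failed:", err)
		}
	}
	if err := run("cache", "list"); err != nil {
		fmt.Fprintln(os.Stderr, "cache list failed:", err)
	}
	for _, img := range images {
		if err := run("cache", "delete", img); err != nil {
			fmt.Fprintln(os.Stderr, "cache delete failed:", err)
		}
	}
}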

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-303000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3306413091/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 cache add minikube-local-cache-test:functional-303000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 cache delete minikube-local-cache-test:functional-303000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-303000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.58s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (76.83s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-303000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-303000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m16.83259499s)
functional_test.go:757: restart took 1m16.832722569s for "functional-303000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (76.83s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-303000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (2.17s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-303000 logs: (2.168944319s)
--- PASS: TestFunctional/serial/LogsCmd (2.17s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (2.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3970490322/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-303000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3970490322/001/logs.txt: (2.394190305s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.40s)

TestFunctional/serial/InvalidService (4.89s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-303000 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-303000
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-303000: exit status 115 (276.130728ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.5:30158 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-303000 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-303000 delete -f testdata/invalidsvc.yaml: (1.438770491s)
--- PASS: TestFunctional/serial/InvalidService (4.89s)
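A minimal sketch of the failure mode this entry checks, assuming kubectl and minikube on PATH and a manifest equivalent to testdata/invalidsvc.yaml (a Service backed by no running pod):
    kubectl --context functional-303000 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-303000     # expected: exit status 115, SVC_UNREACHABLE
    kubectl --context functional-303000 delete -f testdata/invalidsvc.yaml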

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-303000 config get cpus: exit status 14 (72.587496ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-303000 config get cpus: exit status 14 (60.51106ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
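The config round-trip above can be replayed directly; a sketch, assuming minikube on PATH (exit status 14 is what the log reports when the key is missing):
    minikube -p functional-303000 config unset cpus
    minikube -p functional-303000 config get cpus      # exit status 14: key not found in config
    minikube -p functional-303000 config set cpus 2
    minikube -p functional-303000 config get cpus      # succeeds while the key is set
    minikube -p functional-303000 config unset cpus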

TestFunctional/parallel/DashboardCmd (13.72s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-303000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-303000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 5164: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.72s)

TestFunctional/parallel/DryRun (1.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-303000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-303000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (772.041521ms)

-- stdout --
	* [functional-303000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1212 12:06:14.593347    5073 out.go:296] Setting OutFile to fd 1 ...
	I1212 12:06:14.593628    5073 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:06:14.593634    5073 out.go:309] Setting ErrFile to fd 2...
	I1212 12:06:14.593638    5073 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:06:14.593824    5073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	I1212 12:06:14.595251    5073 out.go:303] Setting JSON to false
	I1212 12:06:14.618033    5073 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2145,"bootTime":1702409429,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 12:06:14.618123    5073 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 12:06:14.679005    5073 out.go:177] * [functional-303000] minikube v1.32.0 on Darwin 14.2
	I1212 12:06:14.721115    5073 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 12:06:14.700559    5073 notify.go:220] Checking for updates...
	I1212 12:06:14.763105    5073 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:06:14.804843    5073 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 12:06:14.846147    5073 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 12:06:14.887990    5073 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	I1212 12:06:14.929946    5073 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 12:06:14.987976    5073 config.go:182] Loaded profile config "functional-303000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:06:14.988678    5073 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:06:14.988741    5073 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:06:14.997466    5073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50517
	I1212 12:06:14.997831    5073 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:06:14.998272    5073 main.go:141] libmachine: Using API Version  1
	I1212 12:06:14.998281    5073 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:06:14.998504    5073 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:06:14.998626    5073 main.go:141] libmachine: (functional-303000) Calling .DriverName
	I1212 12:06:14.998809    5073 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 12:06:14.999041    5073 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:06:14.999060    5073 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:06:15.007115    5073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50519
	I1212 12:06:15.007472    5073 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:06:15.007823    5073 main.go:141] libmachine: Using API Version  1
	I1212 12:06:15.007842    5073 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:06:15.008110    5073 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:06:15.008235    5073 main.go:141] libmachine: (functional-303000) Calling .DriverName
	I1212 12:06:15.066255    5073 out.go:177] * Using the hyperkit driver based on existing profile
	I1212 12:06:15.124199    5073 start.go:298] selected driver: hyperkit
	I1212 12:06:15.124225    5073 start.go:902] validating driver "hyperkit" against &{Name:functional-303000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-303000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.169.0.5 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 12:06:15.124426    5073 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 12:06:15.187105    5073 out.go:177] 
	W1212 12:06:15.228968    5073 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 12:06:15.250083    5073 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-303000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.48s)
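The two dry-run invocations differ only in the requested memory; a sketch of the boundary they probe, assuming minikube on PATH:
    # 250MB is below the 1800MB usable minimum: exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY
    minikube start -p functional-303000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit
    # with the profile's default memory the dry run validates cleanly
    minikube start -p functional-303000 --dry-run --alsologtostderr -v=1 --driver=hyperkit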

TestFunctional/parallel/InternationalLanguage (0.59s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-303000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-303000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (592.943043ms)

-- stdout --
	* [functional-303000] minikube v1.32.0 sur Darwin 14.2
	  - MINIKUBE_LOCATION=17734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1212 12:06:16.064625    5104 out.go:296] Setting OutFile to fd 1 ...
	I1212 12:06:16.064836    5104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:06:16.064842    5104 out.go:309] Setting ErrFile to fd 2...
	I1212 12:06:16.064846    5104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1212 12:06:16.065061    5104 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
	I1212 12:06:16.066719    5104 out.go:303] Setting JSON to false
	I1212 12:06:16.089907    5104 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2147,"bootTime":1702409429,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1212 12:06:16.090012    5104 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1212 12:06:16.112377    5104 out.go:177] * [functional-303000] minikube v1.32.0 sur Darwin 14.2
	I1212 12:06:16.174951    5104 out.go:177]   - MINIKUBE_LOCATION=17734
	I1212 12:06:16.154075    5104 notify.go:220] Checking for updates...
	I1212 12:06:16.216765    5104 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	I1212 12:06:16.258950    5104 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1212 12:06:16.300966    5104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 12:06:16.342753    5104 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	I1212 12:06:16.384885    5104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 12:06:16.406302    5104 config.go:182] Loaded profile config "functional-303000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1212 12:06:16.406770    5104 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:06:16.406827    5104 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:06:16.415943    5104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50535
	I1212 12:06:16.416346    5104 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:06:16.416772    5104 main.go:141] libmachine: Using API Version  1
	I1212 12:06:16.416782    5104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:06:16.417034    5104 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:06:16.417152    5104 main.go:141] libmachine: (functional-303000) Calling .DriverName
	I1212 12:06:16.417374    5104 driver.go:392] Setting default libvirt URI to qemu:///system
	I1212 12:06:16.417614    5104 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1212 12:06:16.417637    5104 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1212 12:06:16.426449    5104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50537
	I1212 12:06:16.427028    5104 main.go:141] libmachine: () Calling .GetVersion
	I1212 12:06:16.427476    5104 main.go:141] libmachine: Using API Version  1
	I1212 12:06:16.427492    5104 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 12:06:16.427730    5104 main.go:141] libmachine: () Calling .GetMachineName
	I1212 12:06:16.427845    5104 main.go:141] libmachine: (functional-303000) Calling .DriverName
	I1212 12:06:16.456965    5104 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I1212 12:06:16.498792    5104 start.go:298] selected driver: hyperkit
	I1212 12:06:16.498820    5104 start.go:902] validating driver "hyperkit" against &{Name:functional-303000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17738/minikube-v1.32.1-1701996673-17738-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702334074-17764@sha256:242468f3f874ac6982f8a024f9c4a97f957667e2ee92ef27b2ae70cc267db401 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-303000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.169.0.5 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1212 12:06:16.499137    5104 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 12:06:16.525898    5104 out.go:177] 
	W1212 12:06:16.546979    5104 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 12:06:16.567809    5104 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.59s)
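This is the localized counterpart of DryRun: the French stderr line corresponds to the English RSRC_INSUFFICIENT_REQ_MEMORY message above (the requested 250MiB is below the usable minimum of 1800MB). A sketch of the idea, assuming the locale is selected through an environment variable such as LC_ALL (an assumption; the mechanism is not shown in this log):
    # hypothetical: force a French locale, then repeat the undersized dry run
    LC_ALL=fr minikube start -p functional-303000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit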

TestFunctional/parallel/StatusCmd (0.51s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.51s)
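The three status invocations cover the default, templated, and JSON output modes; a sketch, assuming minikube on PATH (the template fields are copied from the logged command, including its "kublet" label, with shell quoting added):
    minikube -p functional-303000 status
    minikube -p functional-303000 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube -p functional-303000 status -o json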

TestFunctional/parallel/ServiceCmdConnect (8.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-303000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-303000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-cph5v" [3fcb42dd-6c25-4ad7-8c10-a2769c9ba614] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-cph5v" [3fcb42dd-6c25-4ad7-8c10-a2769c9ba614] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.012067586s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.169.0.5:31711
functional_test.go:1674: http://192.169.0.5:31711: success! body:

Hostname: hello-node-connect-55497b8b78-cph5v

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.5:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.5:31711
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.61s)
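A sketch of the end-to-end check above, assuming kubectl and minikube on PATH; curl stands in here for the HTTP GET the test performs against the NodePort URL (http://192.169.0.5:31711 in this run):
    kubectl --context functional-303000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-303000 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(minikube -p functional-303000 service hello-node-connect --url)
    curl "$URL"     # echoserver reflects the request back, as in the body above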

TestFunctional/parallel/AddonsCmd (0.32s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.32s)

TestFunctional/parallel/PersistentVolumeClaim (26.42s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [89de22cc-01ef-4932-8e6c-bf1ebeba5d20] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011263467s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-303000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-303000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-303000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-303000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d7772feb-a916-45e8-a194-2b440b7e7a10] Pending
helpers_test.go:344: "sp-pod" [d7772feb-a916-45e8-a194-2b440b7e7a10] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d7772feb-a916-45e8-a194-2b440b7e7a10] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.016106292s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-303000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-303000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-303000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8f211469-2c5c-4596-a5e3-99cd3f6590c9] Pending
helpers_test.go:344: "sp-pod" [8f211469-2c5c-4596-a5e3-99cd3f6590c9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8f211469-2c5c-4596-a5e3-99cd3f6590c9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.013539783s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-303000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.42s)
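A sketch of the persistence check, assuming kubectl on PATH and manifests equivalent to testdata/storage-provisioner/{pvc,pod}.yaml (a PVC named myclaim mounted by pod sp-pod under /tmp/mount):
    kubectl --context functional-303000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-303000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-303000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-303000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-303000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-303000 exec sp-pod -- ls /tmp/mount     # foo survives the pod being recreated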

TestFunctional/parallel/SSHCmd (0.3s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.30s)

TestFunctional/parallel/CpCmd (1.17s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh -n functional-303000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 cp functional-303000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd275077459/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh -n functional-303000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh -n functional-303000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.17s)
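The copy round-trip can be replayed directly; a sketch, assuming minikube on PATH, with the guest paths taken from the log (the local destination is shortened here):
    minikube -p functional-303000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    minikube -p functional-303000 ssh -n functional-303000 "sudo cat /home/docker/cp-test.txt"
    minikube -p functional-303000 cp functional-303000:/home/docker/cp-test.txt ./cp-test.txt
    minikube -p functional-303000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt     # missing directories are created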

TestFunctional/parallel/MySQL (26.75s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-303000 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-rctzb" [fa76f25e-045a-4d71-9317-26cbecebc375] Pending
E1212 12:05:19.682869    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
helpers_test.go:344: "mysql-859648c796-rctzb" [fa76f25e-045a-4d71-9317-26cbecebc375] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1212 12:05:22.244242    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
helpers_test.go:344: "mysql-859648c796-rctzb" [fa76f25e-045a-4d71-9317-26cbecebc375] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.023402203s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-303000 exec mysql-859648c796-rctzb -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-303000 exec mysql-859648c796-rctzb -- mysql -ppassword -e "show databases;": exit status 1 (177.956313ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-303000 exec mysql-859648c796-rctzb -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-303000 exec mysql-859648c796-rctzb -- mysql -ppassword -e "show databases;": exit status 1 (137.939529ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-303000 exec mysql-859648c796-rctzb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.75s)
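The two non-zero exits above are ordinary startup races (first access is denied, then mysqld's socket is not yet available while the pod is still initialising), so the same query is simply retried until it succeeds. A sketch, assuming kubectl on PATH and the pod name from this run:
    kubectl --context functional-303000 exec mysql-859648c796-rctzb -- mysql -ppassword -e "show databases;"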

TestFunctional/parallel/FileSync (0.18s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/3198/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "sudo cat /etc/test/nested/copy/3198/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.18s)

TestFunctional/parallel/CertSync (1.32s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/3198.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "sudo cat /etc/ssl/certs/3198.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/3198.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "sudo cat /usr/share/ca-certificates/3198.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "sudo cat /etc/ssl/certs/51391683.0"
E1212 12:05:18.402690    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
functional_test.go:1998: Checking for existence of /etc/ssl/certs/31982.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "sudo cat /etc/ssl/certs/31982.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/31982.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "sudo cat /usr/share/ca-certificates/31982.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.32s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-303000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
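The label check drives kubectl with a go-template; a sketch, assuming kubectl on PATH (quoting adjusted for an interactive shell):
    kubectl --context functional-303000 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'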

TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-303000 ssh "sudo systemctl is-active crio": exit status 1 (209.572099ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)
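The check leans on systemctl's exit codes: is-active prints "inactive" and exits 3 for a stopped unit, which minikube ssh surfaces as "Process exited with status 3". A sketch, assuming minikube on PATH:
    minikube -p functional-303000 ssh "sudo systemctl is-active crio"     # prints inactive, exits 3 because crio is not the active runtime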

TestFunctional/parallel/License (0.52s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.52s)

TestFunctional/parallel/Version/short (0.13s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

TestFunctional/parallel/Version/components (0.46s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-303000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:3.9
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-303000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-303000
docker.io/library/<none>:<none>
docker.io/library/<none>:<none>
docker.io/library/<none>:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-303000 image ls --format short --alsologtostderr:
I1212 12:06:20.791727    5198 out.go:296] Setting OutFile to fd 1 ...
I1212 12:06:20.792009    5198 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 12:06:20.792014    5198 out.go:309] Setting ErrFile to fd 2...
I1212 12:06:20.792019    5198 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 12:06:20.792220    5198 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
I1212 12:06:20.792870    5198 config.go:182] Loaded profile config "functional-303000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 12:06:20.792961    5198 config.go:182] Loaded profile config "functional-303000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 12:06:20.793320    5198 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 12:06:20.793369    5198 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 12:06:20.801220    5198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50666
I1212 12:06:20.801661    5198 main.go:141] libmachine: () Calling .GetVersion
I1212 12:06:20.802106    5198 main.go:141] libmachine: Using API Version  1
I1212 12:06:20.802116    5198 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 12:06:20.802342    5198 main.go:141] libmachine: () Calling .GetMachineName
I1212 12:06:20.802453    5198 main.go:141] libmachine: (functional-303000) Calling .GetState
I1212 12:06:20.802537    5198 main.go:141] libmachine: (functional-303000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1212 12:06:20.802604    5198 main.go:141] libmachine: (functional-303000) DBG | hyperkit pid from json: 4245
I1212 12:06:20.803964    5198 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 12:06:20.803994    5198 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 12:06:20.811708    5198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50668
I1212 12:06:20.812030    5198 main.go:141] libmachine: () Calling .GetVersion
I1212 12:06:20.812391    5198 main.go:141] libmachine: Using API Version  1
I1212 12:06:20.812405    5198 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 12:06:20.812656    5198 main.go:141] libmachine: () Calling .GetMachineName
I1212 12:06:20.812780    5198 main.go:141] libmachine: (functional-303000) Calling .DriverName
I1212 12:06:20.812931    5198 ssh_runner.go:195] Run: systemctl --version
I1212 12:06:20.812952    5198 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
I1212 12:06:20.813034    5198 main.go:141] libmachine: (functional-303000) Calling .GetSSHPort
I1212 12:06:20.813104    5198 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
I1212 12:06:20.813178    5198 main.go:141] libmachine: (functional-303000) Calling .GetSSHUsername
I1212 12:06:20.813260    5198 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/functional-303000/id_rsa Username:docker}
I1212 12:06:20.852581    5198 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1212 12:06:20.879429    5198 main.go:141] libmachine: Making call to close driver server
I1212 12:06:20.879439    5198 main.go:141] libmachine: (functional-303000) Calling .Close
I1212 12:06:20.879593    5198 main.go:141] libmachine: Successfully made call to close driver server
I1212 12:06:20.879639    5198 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 12:06:20.879661    5198 main.go:141] libmachine: Making call to close driver server
I1212 12:06:20.879666    5198 main.go:141] libmachine: (functional-303000) Calling .Close
I1212 12:06:20.879665    5198 main.go:141] libmachine: (functional-303000) DBG | Closing plugin on server side
I1212 12:06:20.879802    5198 main.go:141] libmachine: Successfully made call to close driver server
I1212 12:06:20.879806    5198 main.go:141] libmachine: (functional-303000) DBG | Closing plugin on server side
I1212 12:06:20.879812    5198 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-303000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/nginx                     | alpine            | 01e5c69afaf63 | 42.6MB |
| docker.io/library/nginx                     | latest            | a6bd71f48f683 | 187MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/<none>                    | <none>            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| docker.io/library/minikube-local-cache-test | functional-303000 | 433c960fb9294 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| gcr.io/google-containers/addon-resizer      | functional-303000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/<none>                    | <none>            | da86e6ba6ca19 | 742kB  |
| docker.io/localhost/my-image                | functional-303000 | 567009db05869 | 1.24MB |
| docker.io/library/mysql                     | 5.7               | bdba757bc9336 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/library/<none>                    | <none>            | 0184c1613d929 | 683kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-303000 image ls --format table --alsologtostderr:
I1212 12:06:23.528224    5231 out.go:296] Setting OutFile to fd 1 ...
I1212 12:06:23.528654    5231 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 12:06:23.528661    5231 out.go:309] Setting ErrFile to fd 2...
I1212 12:06:23.528666    5231 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 12:06:23.528982    5231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
I1212 12:06:23.529907    5231 config.go:182] Loaded profile config "functional-303000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 12:06:23.530051    5231 config.go:182] Loaded profile config "functional-303000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 12:06:23.530491    5231 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 12:06:23.530553    5231 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 12:06:23.541735    5231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50700
I1212 12:06:23.542472    5231 main.go:141] libmachine: () Calling .GetVersion
I1212 12:06:23.543089    5231 main.go:141] libmachine: Using API Version  1
I1212 12:06:23.543102    5231 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 12:06:23.543511    5231 main.go:141] libmachine: () Calling .GetMachineName
I1212 12:06:23.543690    5231 main.go:141] libmachine: (functional-303000) Calling .GetState
I1212 12:06:23.543847    5231 main.go:141] libmachine: (functional-303000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1212 12:06:23.543930    5231 main.go:141] libmachine: (functional-303000) DBG | hyperkit pid from json: 4245
I1212 12:06:23.546289    5231 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 12:06:23.546330    5231 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 12:06:23.557323    5231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50702
I1212 12:06:23.557756    5231 main.go:141] libmachine: () Calling .GetVersion
I1212 12:06:23.558244    5231 main.go:141] libmachine: Using API Version  1
I1212 12:06:23.558267    5231 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 12:06:23.558569    5231 main.go:141] libmachine: () Calling .GetMachineName
I1212 12:06:23.558713    5231 main.go:141] libmachine: (functional-303000) Calling .DriverName
I1212 12:06:23.558925    5231 ssh_runner.go:195] Run: systemctl --version
I1212 12:06:23.558948    5231 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
I1212 12:06:23.559049    5231 main.go:141] libmachine: (functional-303000) Calling .GetSSHPort
I1212 12:06:23.559175    5231 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
I1212 12:06:23.559285    5231 main.go:141] libmachine: (functional-303000) Calling .GetSSHUsername
I1212 12:06:23.559399    5231 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/functional-303000/id_rsa Username:docker}
I1212 12:06:23.600477    5231 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1212 12:06:23.626900    5231 main.go:141] libmachine: Making call to close driver server
I1212 12:06:23.626910    5231 main.go:141] libmachine: (functional-303000) Calling .Close
I1212 12:06:23.627102    5231 main.go:141] libmachine: Successfully made call to close driver server
I1212 12:06:23.627119    5231 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 12:06:23.627147    5231 main.go:141] libmachine: (functional-303000) DBG | Closing plugin on server side
I1212 12:06:23.627164    5231 main.go:141] libmachine: Making call to close driver server
I1212 12:06:23.627173    5231 main.go:141] libmachine: (functional-303000) Calling .Close
I1212 12:06:23.627356    5231 main.go:141] libmachine: Successfully made call to close driver server
I1212 12:06:23.627365    5231 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 12:06:23.627370    5231 main.go:141] libmachine: (functional-303000) DBG | Closing plugin on server side
2023/12/12 12:06:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-303000 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["docker.io/library/\u003cnone\u003e:\u003cnone\u003e"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"115053965e86b2df4d78af78
d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-303000"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"567009db058696e9faaaae7295f6fe99ffc57ce25edac0118875fdf263b52103","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-303000"],"size":"1240000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133ea
a4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["docker.io/library/\u003cnone\u003e:\u003cnone\u003e"],"size":"742000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"433c960fb92948826b2afba9b06ce014a7f75dbd50d92d7ad2718216e88b5cb5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-303000"],"size":"30"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","rep
oDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["docker.io/library/\u003cnone\u003e:\u003cnone\u003e"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-303000 image ls --format json --alsologtostderr:
I1212 12:06:23.346392    5225 out.go:296] Setting OutFile to fd 1 ...
I1212 12:06:23.346723    5225 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 12:06:23.346730    5225 out.go:309] Setting ErrFile to fd 2...
I1212 12:06:23.346751    5225 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 12:06:23.346982    5225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
I1212 12:06:23.347831    5225 config.go:182] Loaded profile config "functional-303000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 12:06:23.347967    5225 config.go:182] Loaded profile config "functional-303000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 12:06:23.348367    5225 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 12:06:23.348420    5225 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 12:06:23.356926    5225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50695
I1212 12:06:23.357378    5225 main.go:141] libmachine: () Calling .GetVersion
I1212 12:06:23.357826    5225 main.go:141] libmachine: Using API Version  1
I1212 12:06:23.357838    5225 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 12:06:23.358069    5225 main.go:141] libmachine: () Calling .GetMachineName
I1212 12:06:23.358194    5225 main.go:141] libmachine: (functional-303000) Calling .GetState
I1212 12:06:23.358289    5225 main.go:141] libmachine: (functional-303000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1212 12:06:23.358361    5225 main.go:141] libmachine: (functional-303000) DBG | hyperkit pid from json: 4245
I1212 12:06:23.359811    5225 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 12:06:23.359831    5225 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 12:06:23.367851    5225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50697
I1212 12:06:23.368228    5225 main.go:141] libmachine: () Calling .GetVersion
I1212 12:06:23.368596    5225 main.go:141] libmachine: Using API Version  1
I1212 12:06:23.368607    5225 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 12:06:23.368830    5225 main.go:141] libmachine: () Calling .GetMachineName
I1212 12:06:23.368940    5225 main.go:141] libmachine: (functional-303000) Calling .DriverName
I1212 12:06:23.369097    5225 ssh_runner.go:195] Run: systemctl --version
I1212 12:06:23.369118    5225 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
I1212 12:06:23.369212    5225 main.go:141] libmachine: (functional-303000) Calling .GetSSHPort
I1212 12:06:23.369297    5225 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
I1212 12:06:23.369395    5225 main.go:141] libmachine: (functional-303000) Calling .GetSSHUsername
I1212 12:06:23.369508    5225 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/functional-303000/id_rsa Username:docker}
I1212 12:06:23.414455    5225 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1212 12:06:23.443147    5225 main.go:141] libmachine: Making call to close driver server
I1212 12:06:23.443172    5225 main.go:141] libmachine: (functional-303000) Calling .Close
I1212 12:06:23.443353    5225 main.go:141] libmachine: Successfully made call to close driver server
I1212 12:06:23.443356    5225 main.go:141] libmachine: (functional-303000) DBG | Closing plugin on server side
I1212 12:06:23.443368    5225 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 12:06:23.443385    5225 main.go:141] libmachine: Making call to close driver server
I1212 12:06:23.443394    5225 main.go:141] libmachine: (functional-303000) Calling .Close
I1212 12:06:23.443610    5225 main.go:141] libmachine: (functional-303000) DBG | Closing plugin on server side
I1212 12:06:23.443611    5225 main.go:141] libmachine: Successfully made call to close driver server
I1212 12:06:23.443652    5225 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)
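Note on the JSON format: image ls --format json emits a flat array of objects with id, repoDigests, repoTags and size fields, so it pipes cleanly into standard JSON tooling. A minimal sketch, not part of the test and assuming jq is installed on the host:
out/minikube-darwin-amd64 -p functional-303000 image ls --format json \
  | jq -r '.[] | "\(.size)\t\(.repoTags[0])"' \
  | sort -n                      # images sorted by reported size, smallest first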

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-303000 image ls --format yaml --alsologtostderr:
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- docker.io/library/<none>:<none>
size: "240000"
- id: 01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-303000
size: "32900000"
- id: 433c960fb92948826b2afba9b06ce014a7f75dbd50d92d7ad2718216e88b5cb5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-303000
size: "30"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- docker.io/library/<none>:<none>
size: "742000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- docker.io/library/<none>:<none>
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-303000 image ls --format yaml --alsologtostderr:
I1212 12:06:20.961814    5204 out.go:296] Setting OutFile to fd 1 ...
I1212 12:06:20.962032    5204 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 12:06:20.962038    5204 out.go:309] Setting ErrFile to fd 2...
I1212 12:06:20.962042    5204 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 12:06:20.962221    5204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
I1212 12:06:20.962925    5204 config.go:182] Loaded profile config "functional-303000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 12:06:20.963020    5204 config.go:182] Loaded profile config "functional-303000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 12:06:20.963383    5204 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 12:06:20.963436    5204 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 12:06:20.971162    5204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50671
I1212 12:06:20.971586    5204 main.go:141] libmachine: () Calling .GetVersion
I1212 12:06:20.972005    5204 main.go:141] libmachine: Using API Version  1
I1212 12:06:20.972014    5204 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 12:06:20.972240    5204 main.go:141] libmachine: () Calling .GetMachineName
I1212 12:06:20.972347    5204 main.go:141] libmachine: (functional-303000) Calling .GetState
I1212 12:06:20.972433    5204 main.go:141] libmachine: (functional-303000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1212 12:06:20.972506    5204 main.go:141] libmachine: (functional-303000) DBG | hyperkit pid from json: 4245
I1212 12:06:20.973916    5204 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 12:06:20.973943    5204 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 12:06:20.982037    5204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50673
I1212 12:06:20.982393    5204 main.go:141] libmachine: () Calling .GetVersion
I1212 12:06:20.982733    5204 main.go:141] libmachine: Using API Version  1
I1212 12:06:20.982744    5204 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 12:06:20.982972    5204 main.go:141] libmachine: () Calling .GetMachineName
I1212 12:06:20.983076    5204 main.go:141] libmachine: (functional-303000) Calling .DriverName
I1212 12:06:20.983228    5204 ssh_runner.go:195] Run: systemctl --version
I1212 12:06:20.983249    5204 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
I1212 12:06:20.983345    5204 main.go:141] libmachine: (functional-303000) Calling .GetSSHPort
I1212 12:06:20.983434    5204 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
I1212 12:06:20.983534    5204 main.go:141] libmachine: (functional-303000) Calling .GetSSHUsername
I1212 12:06:20.983615    5204 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/functional-303000/id_rsa Username:docker}
I1212 12:06:21.030658    5204 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1212 12:06:21.047582    5204 main.go:141] libmachine: Making call to close driver server
I1212 12:06:21.047592    5204 main.go:141] libmachine: (functional-303000) Calling .Close
I1212 12:06:21.047758    5204 main.go:141] libmachine: Successfully made call to close driver server
I1212 12:06:21.047767    5204 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 12:06:21.047774    5204 main.go:141] libmachine: Making call to close driver server
I1212 12:06:21.047777    5204 main.go:141] libmachine: (functional-303000) DBG | Closing plugin on server side
I1212 12:06:21.047782    5204 main.go:141] libmachine: (functional-303000) Calling .Close
I1212 12:06:21.047918    5204 main.go:141] libmachine: Successfully made call to close driver server
I1212 12:06:21.047929    5204 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 12:06:21.047937    5204 main.go:141] libmachine: (functional-303000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)
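As the stderr traces for the table, json and yaml listings all show, every variant is produced from the same command run over SSH inside the guest (docker images --no-trunc --format "{{json .}}"); only the client-side rendering differs. Roughly the same data can be pulled by hand:
out/minikube-darwin-amd64 -p functional-303000 ssh -- docker images --no-trunc --format "{{json .}}"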

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-303000 ssh pgrep buildkitd: exit status 1 (156.618812ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image build -t localhost/my-image:functional-303000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-303000 image build -t localhost/my-image:functional-303000 testdata/build --alsologtostderr: (1.879578388s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-303000 image build -t localhost/my-image:functional-303000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in d31991834e3e
Removing intermediate container d31991834e3e
---> cecd28987b3c
Step 3/3 : ADD content.txt /
---> 567009db0586
Successfully built 567009db0586
Successfully tagged localhost/my-image:functional-303000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-303000 image build -t localhost/my-image:functional-303000 testdata/build --alsologtostderr:
I1212 12:06:21.292101    5213 out.go:296] Setting OutFile to fd 1 ...
I1212 12:06:21.292540    5213 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 12:06:21.292547    5213 out.go:309] Setting ErrFile to fd 2...
I1212 12:06:21.292551    5213 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1212 12:06:21.292785    5213 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17734-1975/.minikube/bin
I1212 12:06:21.293498    5213 config.go:182] Loaded profile config "functional-303000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 12:06:21.294290    5213 config.go:182] Loaded profile config "functional-303000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1212 12:06:21.294801    5213 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 12:06:21.294858    5213 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 12:06:21.304106    5213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50683
I1212 12:06:21.304658    5213 main.go:141] libmachine: () Calling .GetVersion
I1212 12:06:21.305134    5213 main.go:141] libmachine: Using API Version  1
I1212 12:06:21.305147    5213 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 12:06:21.305443    5213 main.go:141] libmachine: () Calling .GetMachineName
I1212 12:06:21.305583    5213 main.go:141] libmachine: (functional-303000) Calling .GetState
I1212 12:06:21.305687    5213 main.go:141] libmachine: (functional-303000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1212 12:06:21.305765    5213 main.go:141] libmachine: (functional-303000) DBG | hyperkit pid from json: 4245
I1212 12:06:21.307279    5213 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1212 12:06:21.307329    5213 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1212 12:06:21.315703    5213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50685
I1212 12:06:21.316073    5213 main.go:141] libmachine: () Calling .GetVersion
I1212 12:06:21.316401    5213 main.go:141] libmachine: Using API Version  1
I1212 12:06:21.316411    5213 main.go:141] libmachine: () Calling .SetConfigRaw
I1212 12:06:21.316611    5213 main.go:141] libmachine: () Calling .GetMachineName
I1212 12:06:21.316708    5213 main.go:141] libmachine: (functional-303000) Calling .DriverName
I1212 12:06:21.316862    5213 ssh_runner.go:195] Run: systemctl --version
I1212 12:06:21.316881    5213 main.go:141] libmachine: (functional-303000) Calling .GetSSHHostname
I1212 12:06:21.316950    5213 main.go:141] libmachine: (functional-303000) Calling .GetSSHPort
I1212 12:06:21.317032    5213 main.go:141] libmachine: (functional-303000) Calling .GetSSHKeyPath
I1212 12:06:21.317125    5213 main.go:141] libmachine: (functional-303000) Calling .GetSSHUsername
I1212 12:06:21.317208    5213 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17734-1975/.minikube/machines/functional-303000/id_rsa Username:docker}
I1212 12:06:21.358370    5213 build_images.go:151] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.2765519484.tar
I1212 12:06:21.358442    5213 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 12:06:21.368591    5213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2765519484.tar
I1212 12:06:21.371544    5213 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2765519484.tar: stat -c "%s %y" /var/lib/minikube/build/build.2765519484.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2765519484.tar': No such file or directory
I1212 12:06:21.371573    5213 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.2765519484.tar --> /var/lib/minikube/build/build.2765519484.tar (3072 bytes)
I1212 12:06:21.396537    5213 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2765519484
I1212 12:06:21.412663    5213 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2765519484 -xf /var/lib/minikube/build/build.2765519484.tar
I1212 12:06:21.420620    5213 docker.go:346] Building image: /var/lib/minikube/build/build.2765519484
I1212 12:06:21.420682    5213 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-303000 /var/lib/minikube/build/build.2765519484
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I1212 12:06:23.057484    5213 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-303000 /var/lib/minikube/build/build.2765519484: (1.636842227s)
I1212 12:06:23.057543    5213 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2765519484
I1212 12:06:23.064500    5213 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2765519484.tar
I1212 12:06:23.071029    5213 build_images.go:207] Built localhost/my-image:functional-303000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.2765519484.tar
I1212 12:06:23.071048    5213 build_images.go:123] succeeded building to: functional-303000
I1212 12:06:23.071052    5213 build_images.go:124] failed building to: 
I1212 12:06:23.071071    5213 main.go:141] libmachine: Making call to close driver server
I1212 12:06:23.071077    5213 main.go:141] libmachine: (functional-303000) Calling .Close
I1212 12:06:23.071236    5213 main.go:141] libmachine: Successfully made call to close driver server
I1212 12:06:23.071245    5213 main.go:141] libmachine: Making call to close connection to plugin binary
I1212 12:06:23.071251    5213 main.go:141] libmachine: Making call to close driver server
I1212 12:06:23.071255    5213 main.go:141] libmachine: (functional-303000) DBG | Closing plugin on server side
I1212 12:06:23.071260    5213 main.go:141] libmachine: (functional-303000) Calling .Close
I1212 12:06:23.071419    5213 main.go:141] libmachine: (functional-303000) DBG | Closing plugin on server side
I1212 12:06:23.071467    5213 main.go:141] libmachine: Successfully made call to close driver server
I1212 12:06:23.071486    5213 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.22s)
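The Step 1/3 through 3/3 lines above imply a three-line Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) plus a small content.txt in testdata/build. A sketch of reproducing the same build by hand; the scratch directory and file contents here are made up for illustration:
mkdir -p /tmp/build-demo && echo hello > /tmp/build-demo/content.txt
cat > /tmp/build-demo/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-darwin-amd64 -p functional-303000 image build -t localhost/my-image:functional-303000 /tmp/build-demo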

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.409208284s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-303000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.48s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/bash (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-303000 docker-env) && out/minikube-darwin-amd64 status -p functional-303000"
E1212 12:05:17.124948    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 12:05:17.130902    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 12:05:17.141283    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 12:05:17.161495    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 12:05:17.201689    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 12:05:17.281814    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 12:05:17.442035    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-303000 docker-env) && docker images"
E1212 12:05:17.762310    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.92s)
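What the bash invocation above exercises: minikube docker-env prints shell exports (DOCKER_HOST and friends) for the Docker daemon inside the VM, so evaluating it points the host's docker client at the cluster. A minimal sketch of the same flow:
eval "$(out/minikube-darwin-amd64 -p functional-303000 docker-env)"
docker images                                                               # lists images from the VM's daemon, not the host's
eval "$(out/minikube-darwin-amd64 -p functional-303000 docker-env --unset)" # point the client back at the host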

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
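For the three update-context variants above, minikube update-context rewrites the kubeconfig entry for the profile to the cluster's current apiserver address; these subtests simply run the command against the existing profile and expect it to succeed. A quick way to see the effect (a sketch, not part of the test):
out/minikube-darwin-amd64 -p functional-303000 update-context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # apiserver URL the kubeconfig now points at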

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image load --daemon gcr.io/google-containers/addon-resizer:functional-303000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-303000 image load --daemon gcr.io/google-containers/addon-resizer:functional-303000 --alsologtostderr: (3.072089392s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image load --daemon gcr.io/google-containers/addon-resizer:functional-303000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-303000 image load --daemon gcr.io/google-containers/addon-resizer:functional-303000 --alsologtostderr: (2.091966862s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.028704452s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-303000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image load --daemon gcr.io/google-containers/addon-resizer:functional-303000 --alsologtostderr
E1212 12:05:27.364984    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-303000 image load --daemon gcr.io/google-containers/addon-resizer:functional-303000 --alsologtostderr: (3.167022811s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image save gcr.io/google-containers/addon-resizer:functional-303000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-303000 image save gcr.io/google-containers/addon-resizer:functional-303000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.217426257s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image rm gcr.io/google-containers/addon-resizer:functional-303000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-303000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.222489586s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-303000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 image save --daemon gcr.io/google-containers/addon-resizer:functional-303000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-303000 image save --daemon gcr.io/google-containers/addon-resizer:functional-303000 --alsologtostderr: (1.213399048s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-303000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.32s)
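Taken together, the Save/Remove/Load/SaveDaemon tests above round-trip an image through a tarball and back into both the cluster runtime and the host Docker daemon. The same sequence by hand, using the tag and path from the logs:
out/minikube-darwin-amd64 -p functional-303000 image save gcr.io/google-containers/addon-resizer:functional-303000 /Users/jenkins/workspace/addon-resizer-save.tar
out/minikube-darwin-amd64 -p functional-303000 image rm gcr.io/google-containers/addon-resizer:functional-303000
out/minikube-darwin-amd64 -p functional-303000 image load /Users/jenkins/workspace/addon-resizer-save.tar
out/minikube-darwin-amd64 -p functional-303000 image save --daemon gcr.io/google-containers/addon-resizer:functional-303000
docker image inspect gcr.io/google-containers/addon-resizer:functional-303000   # confirms the image reached the host daemon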

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (12.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-303000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-303000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-c9h5k" [ffb0be3c-e245-40da-a951-06dc2b0b9e32] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E1212 12:05:37.604915    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
helpers_test.go:344: "hello-node-d7447cc7f-c9h5k" [ffb0be3c-e245-40da-a951-06dc2b0b9e32] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.00922868s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.13s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-303000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-303000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-303000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-303000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4862: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-303000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 service list -o json
functional_test.go:1493: Took "428.552105ms" to run "out/minikube-darwin-amd64 -p functional-303000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-303000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f3b3f2d8-22e9-441c-b514-f7f5e82a96e6] Pending
helpers_test.go:344: "nginx-svc" [f3b3f2d8-22e9-441c-b514-f7f5e82a96e6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f3b3f2d8-22e9-441c-b514-f7f5e82a96e6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.012459978s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.18s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.169.0.5:32148
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.169.0.5:32148
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.26s)
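The ServiceCmd subtests above resolve the NodePort endpoint for the hello-node deployment in several forms (list, JSON, https, IP-only format, plain URL). Hitting the endpoint directly is a one-liner; echoserver is expected to reflect the request details back (sketch):
URL=$(out/minikube-darwin-amd64 -p functional-303000 service hello-node --url)
curl -s "$URL"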

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-303000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E1212 12:05:58.084388    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.93.121 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-303000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
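The tunnel serial group above follows the usual minikube tunnel workflow: keep the tunnel running in the background, wait for the LoadBalancer service to receive an ingress IP, then reach the service and the cluster DNS (10.96.0.10) straight from the host. Condensed from the commands in the logs; on macOS the tunnel may prompt for sudo to create routes:
out/minikube-darwin-amd64 -p functional-303000 tunnel --alsologtostderr &
kubectl --context functional-303000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
curl -s http://nginx-svc.default.svc.cluster.local./   # reachable once host DNS queries reach the cluster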

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "216.024433ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "78.489625ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "202.772458ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "77.969848ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)
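The ProfileCmd subtests above mainly time the different profile listings. For reference, the human-readable and machine-readable forms used here are:
out/minikube-darwin-amd64 profile list
out/minikube-darwin-amd64 profile list -o json --light   # --light skips status checks for a faster listing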

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-303000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3283324878/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702411568707692000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3283324878/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702411568707692000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3283324878/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702411568707692000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3283324878/001/test-1702411568707692000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (155.230157ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 20:06 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 20:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 20:06 test-1702411568707692000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh cat /mount-9p/test-1702411568707692000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-303000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c24f2be7-ceef-4a98-b31c-e401be325570] Pending
helpers_test.go:344: "busybox-mount" [c24f2be7-ceef-4a98-b31c-e401be325570] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c24f2be7-ceef-4a98-b31c-e401be325570] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c24f2be7-ceef-4a98-b31c-e401be325570] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.01023651s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-303000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-303000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3283324878/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.04s)
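The any-port mount test above drives minikube mount end to end: share a host directory into the guest over 9p, verify the mount with findmnt and ls over ssh, then have the busybox-mount pod read and write files in it. The host-side half looks roughly like this (a sketch; the temporary directory is whatever mktemp returns):
SRC=$(mktemp -d)
echo "created by hand" > "$SRC/created-by-test"
out/minikube-darwin-amd64 mount -p functional-303000 "$SRC:/mount-9p" --alsologtostderr -v=1 &
out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-darwin-amd64 -p functional-303000 ssh -- ls -la /mount-9p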

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-303000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port2766531209/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (199.616293ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (197.971942ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-303000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port2766531209/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-303000 ssh "sudo umount -f /mount-9p": exit status 1 (151.264974ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-303000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-303000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port2766531209/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.66s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-303000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3892218663/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-303000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3892218663/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-303000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3892218663/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T" /mount1: exit status 1 (190.633145ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T" /mount1: exit status 1 (269.92873ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-303000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-303000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3892218663/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-303000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3892218663/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-303000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3892218663/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.15s)
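The three MountCmd subtests above verify the 9p mount the same way from inside the guest; a minimal manual sketch using the same commands the tests ran (the host directory below is a placeholder, not a path from this run):

$ out/minikube-darwin-amd64 mount -p functional-303000 <host-dir>:/mount-9p --alsologtostderr -v=1 &
$ out/minikube-darwin-amd64 -p functional-303000 ssh "findmnt -T /mount-9p | grep 9p"   # succeeds once the 9p server is serving
$ out/minikube-darwin-amd64 -p functional-303000 ssh "sudo umount -f /mount-9p"         # manual cleanup, as in the any-port subtest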

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.13s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-303000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

                                                
                                    
TestFunctional/delete_my-image_image (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-303000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-303000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                    
TestImageBuild/serial/Setup (37.84s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-419000 --driver=hyperkit 
E1212 12:06:39.043260    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-419000 --driver=hyperkit : (37.83684482s)
--- PASS: TestImageBuild/serial/Setup (37.84s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.28s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-419000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-419000: (1.277224563s)
--- PASS: TestImageBuild/serial/NormalBuild (1.28s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.77s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-419000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.77s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.25s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-419000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.25s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-419000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (102.48s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-649000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit 
E1212 12:08:00.962330    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-649000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit : (1m42.484677283s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (102.48s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.39s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-649000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-649000 addons enable ingress --alsologtostderr -v=5: (14.392461909s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.39s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.56s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-649000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.56s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (47.17s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-649000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-649000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.998708463s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-649000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-649000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0f49abb1-8df9-4f68-8be3-15020aaed46b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0f49abb1-8df9-4f68-8be3-15020aaed46b] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.018594233s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-649000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-649000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-649000 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.169.0.7
addons_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-649000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-649000 addons disable ingress-dns --alsologtostderr -v=1: (12.827860157s)
addons_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-649000 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-649000 addons disable ingress --alsologtostderr -v=1: (7.296408943s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (47.17s)

                                                
                                    
TestJSONOutput/start/Command (48.98s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-012000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E1212 12:10:17.114757    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 12:10:19.617331    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:10:19.622755    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:10:19.632884    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:10:19.653988    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:10:19.694122    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:10:19.775665    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:10:19.937058    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:10:20.257381    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:10:20.899003    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:10:22.180822    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:10:24.740869    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:10:29.860888    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:10:40.101572    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:10:44.797003    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-012000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (48.984277754s)
--- PASS: TestJSONOutput/start/Command (48.98s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.48s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-012000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.43s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-012000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.16s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-012000 --output=json --user=testUser
E1212 12:11:00.581247    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-012000 --output=json --user=testUser: (8.163803922s)
--- PASS: TestJSONOutput/stop/Command (8.16s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.9s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-349000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-349000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (467.000575ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"30bf56b4-852e-4c87-8a6b-d15c173fa2d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-349000] minikube v1.32.0 on Darwin 14.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fe591cb1-c406-4698-acb2-9b9098ea9b55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17734"}}
	{"specversion":"1.0","id":"7291bd57-7436-4913-b6e5-755877fb55b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig"}}
	{"specversion":"1.0","id":"e829f085-6707-4bd6-999d-3f08ecd13d8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"5424eb71-3cf0-485a-9d8c-fccd9d63a1f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e05641fd-2abc-48fa-ae85-5c3cb0646c2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube"}}
	{"specversion":"1.0","id":"ae97b819-1448-43bc-8ec0-9bd4e1299e56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a96c7e24-bf40-442a-adaa-f13ff9f609dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-349000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-349000
--- PASS: TestErrorJSONOutput (0.90s)
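Each line in the -- stdout -- block above is a self-contained CloudEvents-style JSON object, so the stream can be post-processed line by line. A minimal sketch that pulls out the error event's message (assumes jq is available on the host; jq is not part of this test suite):

$ out/minikube-darwin-amd64 start -p json-output-error-349000 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
# expected to print: The driver 'fail' is not supported on darwin/amd64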

                                                
                                    
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (87.91s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-161000 --driver=hyperkit 
E1212 12:11:41.541452    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-161000 --driver=hyperkit : (38.886411672s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-163000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-163000 --driver=hyperkit : (39.442291815s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-161000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-163000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-163000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-163000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-163000: (3.449849037s)
helpers_test.go:175: Cleaning up "first-161000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-161000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-161000: (5.269764691s)
--- PASS: TestMinikubeProfile (87.91s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (16.75s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-750000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-750000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (15.75198312s)
--- PASS: TestMountStart/serial/StartWithMountFirst (16.75s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-750000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-750000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (15.89s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-762000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E1212 12:13:03.458908    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-762000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (14.891580922s)
--- PASS: TestMountStart/serial/StartWithMountSecond (15.89s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-762000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-762000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.38s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-750000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-750000 --alsologtostderr -v=5: (2.380892583s)
--- PASS: TestMountStart/serial/DeleteFirst (2.38s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-762000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-762000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (2.23s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-762000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-762000: (2.226614591s)
--- PASS: TestMountStart/serial/Stop (2.23s)

                                                
                                    
TestMountStart/serial/RestartStopped (40.25s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-762000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-762000: (39.248391337s)
--- PASS: TestMountStart/serial/RestartStopped (40.25s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-762000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-762000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (61.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-675000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-675000
multinode_test.go:318: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-675000: (2.235986023s)
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-675000 --wait=true -v=8 --alsologtostderr
E1212 12:17:00.956382    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-675000 --wait=true -v=8 --alsologtostderr: (59.361476703s)
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-675000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (61.71s)

                                                
                                    
TestPreload (193.96s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-292000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E1212 12:20:17.195946    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 12:20:19.698940    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:21:40.239434    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-292000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m57.465699451s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-292000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-292000 image pull gcr.io/k8s-minikube/busybox: (1.158962319s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-292000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-292000: (8.292329178s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-292000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-292000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m1.619770446s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-292000 image list
helpers_test.go:175: Cleaning up "test-preload-292000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-292000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-292000: (5.271717506s)
--- PASS: TestPreload (193.96s)
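The preload round trip above can be reproduced by hand with the same commands the test ran (profile name, image, and versions copied from the log; timings will differ):

$ out/minikube-darwin-amd64 start -p test-preload-292000 --memory=2200 --wait=true --preload=false --driver=hyperkit --kubernetes-version=v1.24.4
$ out/minikube-darwin-amd64 -p test-preload-292000 image pull gcr.io/k8s-minikube/busybox
$ out/minikube-darwin-amd64 stop -p test-preload-292000
$ out/minikube-darwin-amd64 start -p test-preload-292000 --memory=2200 --wait=true --driver=hyperkit
$ out/minikube-darwin-amd64 -p test-preload-292000 image list   # busybox pulled before the stop should still be listed
$ out/minikube-darwin-amd64 delete -p test-preload-292000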

                                                
                                    
TestScheduledStopUnix (105.56s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-780000 --memory=2048 --driver=hyperkit 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-780000 --memory=2048 --driver=hyperkit : (34.000626967s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-780000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-780000 -n scheduled-stop-780000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-780000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-780000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-780000 -n scheduled-stop-780000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-780000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-780000 --schedule 15s
E1212 12:24:17.099817    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-780000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-780000: exit status 7 (69.079478ms)

                                                
                                                
-- stdout --
	scheduled-stop-780000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-780000 -n scheduled-stop-780000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-780000 -n scheduled-stop-780000: exit status 7 (70.248744ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-780000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-780000
--- PASS: TestScheduledStopUnix (105.56s)

                                                
                                    
TestSkaffold (110.85s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe89286156 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-812000 --memory=2600 --driver=hyperkit 
E1212 12:25:17.191789    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 12:25:19.695764    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-812000 --memory=2600 --driver=hyperkit : (34.815516231s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe89286156 run --minikube-profile skaffold-812000 --kube-context skaffold-812000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe89286156 run --minikube-profile skaffold-812000 --kube-context skaffold-812000 --status-check=true --port-forward=false --interactive=false: (56.879308873s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-657545699d-zlm4w" [e8bad7c0-c1b1-4915-a842-a354715b2835] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012610449s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7555d8f5dc-gxpqf" [6e46c510-79d5-4dca-aa45-f6fa3768a231] Running
E1212 12:26:42.743635    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.007756265s
helpers_test.go:175: Cleaning up "skaffold-812000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-812000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-812000: (5.292388362s)
--- PASS: TestSkaffold (110.85s)

                                                
                                    
TestRunningBinaryUpgrade (172.58s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.272374275.exe start -p running-upgrade-331000 --memory=2200 --vm-driver=hyperkit 
E1212 12:30:17.187375    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 12:30:19.690128    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:30:40.145938    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.272374275.exe start -p running-upgrade-331000 --memory=2200 --vm-driver=hyperkit : (1m34.145487022s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-331000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E1212 12:31:32.756512    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
E1212 12:31:32.762541    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
E1212 12:31:32.772948    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
E1212 12:31:32.794200    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
E1212 12:31:32.835808    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
E1212 12:31:32.917712    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
E1212 12:31:33.078482    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
E1212 12:31:33.398572    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
E1212 12:31:34.039971    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
E1212 12:31:35.320340    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
E1212 12:31:37.881586    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
E1212 12:31:43.002590    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
E1212 12:31:53.243286    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-331000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m11.614097767s)
helpers_test.go:175: Cleaning up "running-upgrade-331000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-331000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-331000: (5.28605496s)
--- PASS: TestRunningBinaryUpgrade (172.58s)

                                                
                                    
TestKubernetesUpgrade (142.09s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-304000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit 
E1212 12:32:13.724352    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-304000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit : (1m10.675055158s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-304000
version_upgrade_test.go:240: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-304000: (8.265067961s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-304000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-304000 status --format={{.Host}}: exit status 7 (69.115617ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-304000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-304000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperkit : (32.766452394s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-304000 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-304000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-304000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit : exit status 106 (500.713071ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-304000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-304000
	    minikube start -p kubernetes-upgrade-304000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3040002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-304000 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-304000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:288: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-304000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperkit : (24.46313029s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-304000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-304000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-304000: (5.298290659s)
--- PASS: TestKubernetesUpgrade (142.09s)
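
The block above walks through minikube's supported version path: a fresh start pinned to v1.16.0, a stop, a restart on v1.29.0-rc.2, and a downgrade request that is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED). The same sequence can be driven outside the test harness; the following is a minimal Go sketch, assuming a minikube binary on PATH and a hypothetical profile name k8s-upgrade-demo, not the harness's own helper code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run invokes minikube with the given arguments and returns its exit code.
func run(args ...string) int {
	err := exec.Command("minikube", args...).Run()
	if err == nil {
		return 0
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return -1
}

func main() {
	const profile = "k8s-upgrade-demo" // hypothetical profile name

	run("start", "-p", profile, "--memory=2200", "--kubernetes-version=v1.16.0", "--driver=hyperkit")
	run("stop", "-p", profile)
	run("start", "-p", profile, "--memory=2200", "--kubernetes-version=v1.29.0-rc.2", "--driver=hyperkit")

	// Asking for an older version against the upgraded cluster is refused;
	// the log above shows exit status 106 (K8S_DOWNGRADE_UNSUPPORTED).
	if code := run("start", "-p", profile, "--memory=2200", "--kubernetes-version=v1.16.0", "--driver=hyperkit"); code == 106 {
		fmt.Println("downgrade refused as expected")
	}
}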

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.55s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17734
- KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3707776985/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3707776985/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3707776985/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3707776985/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.55s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.75s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17734
- KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2252413280/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2252413280/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2252413280/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2252413280/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.75s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.55s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.55s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (156.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.4045513251.exe start -p stopped-upgrade-392000 --memory=2200 --vm-driver=hyperkit 
E1212 12:32:54.684030    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.4045513251.exe start -p stopped-upgrade-392000 --memory=2200 --vm-driver=hyperkit : (1m28.265827829s)
version_upgrade_test.go:205: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.4045513251.exe -p stopped-upgrade-392000 stop
version_upgrade_test.go:205: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.4045513251.exe -p stopped-upgrade-392000 stop: (8.097413173s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-392000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E1212 12:34:16.603086    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
E1212 12:34:17.091276    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-392000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m0.568357445s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (156.93s)

                                                
                                    
TestPause/serial/Start (49.12s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-472000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-472000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (49.119740597s)
--- PASS: TestPause/serial/Start (49.12s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-392000
version_upgrade_test.go:219: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-392000: (3.00039627s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.00s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-814000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-814000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (399.390392ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-814000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17734
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17734-1975/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17734-1975/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.40s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (36.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-814000 --driver=hyperkit 
E1212 12:35:17.182671    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 12:35:19.685805    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-814000 --driver=hyperkit : (36.790071464s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-814000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.97s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (41.25s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-472000 --alsologtostderr -v=1 --driver=hyperkit 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-472000 --alsologtostderr -v=1 --driver=hyperkit : (41.236688152s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.25s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-814000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-814000 --no-kubernetes --driver=hyperkit : (13.832478334s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-814000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-814000 status -o json: exit status 2 (147.984633ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-814000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-814000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-814000: (2.633192305s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.61s)
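
The status -o json payload above is a flat object, which makes the condition this step is after (VM running, kubelet and apiserver stopped) easy to check programmatically. Below is a sketch of decoding it, assuming the profile name from this log and tolerating the non-zero exit status minikube uses to signal stopped components.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileStatus mirrors the fields visible in the `minikube status -o json` output above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Exit status 2 only signals that some component is stopped, so the error is
	// ignored here as long as JSON came back on stdout.
	out, _ := exec.Command("minikube", "-p", "NoKubernetes-814000", "status", "-o", "json").Output()

	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	if st.Host == "Running" && st.Kubelet == "Stopped" {
		fmt.Println("Kubernetes is disabled but the VM is up")
	}
}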

                                                
                                    
TestPause/serial/Pause (0.56s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-472000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.56s)

                                                
                                    
TestPause/serial/VerifyStatus (0.17s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-472000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-472000 --output=json --layout=cluster: exit status 2 (166.954963ms)

                                                
                                                
-- stdout --
	{"Name":"pause-472000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-472000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.17s)
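
With --layout=cluster the status output above becomes nested: a top-level cluster status (here StatusName "Paused", code 418) plus per-node component statuses. Below is a sketch of walking that structure, with struct fields limited to what the payload above actually shows.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Only the fields that appear in the --layout=cluster output above.
type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type node struct {
	Name       string
	StatusName string
	Components map[string]component
}

type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string // "Paused" (code 418) after `minikube pause`
	Nodes      []node
}

func main() {
	// A non-zero exit is expected while the cluster is paused, so the error is ignored.
	out, _ := exec.Command("minikube", "status", "-p", "pause-472000",
		"--output=json", "--layout=cluster").Output()

	var cs clusterStatus
	if err := json.Unmarshal(out, &cs); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cluster %s: %s\n", cs.Name, cs.StatusName)
	for _, n := range cs.Nodes {
		for _, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, c.Name, c.StatusName)
		}
	}
}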

                                                
                                    
TestPause/serial/Unpause (0.53s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-472000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.53s)

                                                
                                    
TestPause/serial/PauseAgain (0.64s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-472000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.64s)

                                                
                                    
TestPause/serial/DeletePaused (5.28s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-472000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-472000 --alsologtostderr -v=5: (5.275916237s)
--- PASS: TestPause/serial/DeletePaused (5.28s)

                                                
                                    
TestNoKubernetes/serial/Start (15.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-814000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-814000 --no-kubernetes --driver=hyperkit : (15.856560856s)
--- PASS: TestNoKubernetes/serial/Start (15.86s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.23s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (60.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (1m0.216554358s)
--- PASS: TestNetworkPlugins/group/auto/Start (60.22s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-814000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-814000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (136.62755ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.14s)
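
The probe above leans on systemctl is-active --quiet, which prints nothing and reports through its exit code: 0 when the unit is active, non-zero (surfaced here as ssh status 3) when it is not. Below is a sketch of the same check, assuming the profile name from this log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet` is silent; the exit code carries the answer.
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-814000",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()

	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	default:
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("kubelet is not active (exit %d)\n", ee.ExitCode())
		} else {
			fmt.Println("could not run the probe:", err)
		}
	}
}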

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.55s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-814000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-814000: (2.242544447s)
--- PASS: TestNoKubernetes/serial/Stop (2.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (21.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-814000 --driver=hyperkit 
E1212 12:36:32.751411    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-814000 --driver=hyperkit : (21.192834186s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.19s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-814000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-814000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (133.368693ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (59.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
E1212 12:37:00.441453    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (59.561728277s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.56s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-183000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-183000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hbp4d" [308142a7-8936-4a22-8c2e-b4580f4c3ff6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hbp4d" [308142a7-8936-4a22-8c2e-b4580f4c3ff6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.010002685s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.21s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-183000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
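
The Localhost and HairPin subtests above both use netcat's zero-I/O mode (nc -z) from inside the netcat deployment: one dials localhost:8080, the other dials the netcat service name, i.e. the pod reaching itself back through its own service. Below is a sketch of issuing both probes via kubectl exec, assuming the context name from this log.

package main

import (
	"fmt"
	"os/exec"
)

// probe runs `nc -z` inside the netcat deployment and reports whether the dial succeeded.
func probe(context, target string) bool {
	cmd := exec.Command("kubectl", "--context", context,
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target))
	return cmd.Run() == nil
}

func main() {
	const ctx = "auto-183000"

	fmt.Println("localhost reachable:", probe(ctx, "localhost")) // in-pod loopback
	fmt.Println("hairpin reachable:  ", probe(ctx, "netcat"))    // pod -> own service -> back to pod
}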

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-l57zj" [5a1c78bc-8bab-46ab-86c4-28f8f167ca52] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.01342869s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-183000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-183000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-49qc4" [4ccfb337-1165-4e4f-9105-9e390bdeecd7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-49qc4" [4ccfb337-1165-4e4f-9105-9e390bdeecd7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.007342563s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-183000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (90.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (1m30.092648198s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (49.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (49.022661805s)
--- PASS: TestNetworkPlugins/group/bridge/Start (49.02s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-183000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-183000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n5k8g" [bb6ba3c6-9ca5-47f9-a030-c81717e5fd6c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 12:39:17.086399    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-n5k8g" [bb6ba3c6-9ca5-47f9-a030-c81717e5fd6c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.008264078s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-183000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-183000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-183000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-84fp5" [79f04ddc-8bf9-4d2d-9fb5-4c1ae6b9bdd4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-84fp5" [79f04ddc-8bf9-4d2d-9fb5-4c1ae6b9bdd4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.007774604s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (48.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (48.340948067s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (48.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-183000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (59.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
E1212 12:40:17.178148    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 12:40:19.681275    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (59.079164909s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.08s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-183000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-183000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mxwp4" [897a3ae4-ee8b-45cd-a0e9-391e34f7f6a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mxwp4" [897a3ae4-ee8b-45cd-a0e9-391e34f7f6a3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.007971113s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (32.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-183000 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context kubenet-183000 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.116567768s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context kubenet-183000 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context kubenet-183000 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.121252246s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context kubenet-183000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (32.09s)
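
This is the one DNS subtest in the group that needs retries: two nslookup kubernetes.default attempts time out ("no servers could be reached") before a later run succeeds, which is why it takes 32 seconds. Below is a sketch of the same retry-until-resolved idea, assuming a fixed attempt budget rather than the harness's own retry policy.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const ctx = "kubenet-183000"

	// Retry the in-pod lookup a few times; each failed attempt blocks for the
	// resolver timeout (about 15s in the log above) before returning exit 1.
	for attempt := 1; attempt <= 5; attempt++ {
		cmd := exec.Command("kubectl", "--context", ctx,
			"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("resolved on attempt %d:\n%s", attempt, out)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("kubernetes.default never resolved")
}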

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-183000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-183000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tdrl5" [375645ac-4524-4985-8b88-c22b2b6ff4d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tdrl5" [375645ac-4524-4985-8b88-c22b2b6ff4d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.008185724s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-183000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (71.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m11.979951127s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.98s)

                                                
                                    
TestNetworkPlugins/group/false/Start (59.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
E1212 12:42:12.966985    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:42:12.972514    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:42:12.984419    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:42:13.005829    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:42:13.046917    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:42:13.127529    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:42:13.288828    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:42:13.609399    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:42:14.250156    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:42:15.531164    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:42:18.092846    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:42:23.214602    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:42:33.454619    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-183000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (59.820278188s)
--- PASS: TestNetworkPlugins/group/false/Start (59.82s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-183000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-183000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2qhrl" [58d4c840-6b36-4f4b-8fb6-52862aae0032] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2qhrl" [58d4c840-6b36-4f4b-8fb6-52862aae0032] Running
E1212 12:42:49.105777    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:42:49.112116    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:42:49.123297    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:42:49.144944    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:42:49.185053    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:42:49.266611    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:42:49.427985    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:42:49.748129    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:42:50.390185    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:42:51.670854    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.007865994s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.20s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-d7rlz" [9c6e7cfa-27e4-4287-b1e1-ccae56d4c80b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.015059818s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-183000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-183000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-183000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-drdll" [e5bc1777-03c3-4f80-9256-953ac3335ef1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-drdll" [e5bc1777-03c3-4f80-9256-953ac3335ef1] Running
E1212 12:42:59.354108    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.009628653s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.25s)

TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-183000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-183000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)
E1212 12:58:36.096016    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:59:12.234517    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:59:14.869297    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:59:17.156778    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
E1212 12:59:43.245554    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
E1212 12:59:47.492566    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 13:00:02.805931    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 13:00:15.182519    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 13:00:17.249746    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 13:00:19.753682    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 13:00:34.177303    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory

TestStartStop/group/old-k8s-version/serial/FirstStart (152.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-608000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-608000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (2m32.926996048s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (152.93s)

TestStartStop/group/no-preload/serial/FirstStart (84.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-581000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2
E1212 12:43:22.730781    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:43:30.075968    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:43:34.895431    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:44:11.036807    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:44:14.792635    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:44:14.798964    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:44:14.809253    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:44:14.829353    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:44:14.870884    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:44:14.952122    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:44:15.113643    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:44:15.434312    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:44:16.076445    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:44:17.081620    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
E1212 12:44:17.356910    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:44:19.917682    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:44:25.037904    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:44:35.278181    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:44:43.169178    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
E1212 12:44:43.174532    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
E1212 12:44:43.185472    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
E1212 12:44:43.206201    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
E1212 12:44:43.247057    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
E1212 12:44:43.328612    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
E1212 12:44:43.490698    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
E1212 12:44:43.812341    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
E1212 12:44:44.453612    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
E1212 12:44:45.733778    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-581000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2: (1m24.990154788s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (84.99s)

TestStartStop/group/no-preload/serial/DeployApp (8.57s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-581000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fcc69803-ec84-4c84-8e20-82a7eded6125] Pending
helpers_test.go:344: "busybox" [fcc69803-ec84-4c84-8e20-82a7eded6125] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1212 12:44:48.294066    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [fcc69803-ec84-4c84-8e20-82a7eded6125] Running
E1212 12:44:53.502206    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.016046061s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-581000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.57s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-581000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1212 12:44:55.845048    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-581000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/no-preload/serial/Stop (8.26s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-581000 --alsologtostderr -v=3
E1212 12:44:56.902078    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:45:03.742420    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-581000 --alsologtostderr -v=3: (8.263361669s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.26s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000: exit status 7 (67.886393ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-581000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/no-preload/serial/SecondStart (302.52s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-581000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2
E1212 12:45:17.260899    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 12:45:19.764179    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
E1212 12:45:24.246962    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
E1212 12:45:33.042448    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:45:34.187871    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
E1212 12:45:34.193053    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
E1212 12:45:34.203292    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
E1212 12:45:34.225220    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
E1212 12:45:34.265488    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
E1212 12:45:34.346236    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
E1212 12:45:34.506429    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
E1212 12:45:34.826716    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
E1212 12:45:35.466883    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
E1212 12:45:36.747178    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
E1212 12:45:36.805451    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:45:39.307481    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-581000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2: (5m2.327567728s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-581000 -n no-preload-581000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (302.52s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-608000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e31acbc3-3b9e-4263-91a6-23c09a4e5196] Pending
helpers_test.go:344: "busybox" [e31acbc3-3b9e-4263-91a6-23c09a4e5196] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1212 12:45:44.429376    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e31acbc3-3b9e-4263-91a6-23c09a4e5196] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.017168853s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-608000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-608000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-608000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.68s)

TestStartStop/group/old-k8s-version/serial/Stop (8.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-608000 --alsologtostderr -v=3
E1212 12:45:54.669554    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-608000 --alsologtostderr -v=3: (8.300186861s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.30s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-608000 -n old-k8s-version-608000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-608000 -n old-k8s-version-608000: exit status 7 (68.110706ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-608000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/old-k8s-version/serial/SecondStart (487.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-608000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E1212 12:46:05.206957    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
E1212 12:46:11.500296    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:46:11.505499    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:46:11.515612    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:46:11.537478    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:46:11.577839    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:46:11.658131    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:46:11.818273    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:46:12.139070    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:46:12.780204    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:46:14.061990    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:46:15.149899    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
E1212 12:46:16.623806    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:46:21.743853    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:46:31.984053    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:46:32.830037    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
E1212 12:46:52.464068    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:46:56.109607    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
E1212 12:46:58.725115    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:47:13.048948    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:47:20.218752    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
E1212 12:47:27.127397    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
E1212 12:47:33.424160    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:47:40.741987    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:47:40.894707    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:47:40.899869    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:47:40.910774    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:47:40.930900    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:47:40.971219    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:47:41.053316    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:47:41.214088    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:47:41.534226    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:47:42.174497    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:47:43.454664    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:47:46.014901    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:47:47.863309    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:47:47.868530    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:47:47.878923    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:47:47.900989    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:47:47.941983    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:47:48.024102    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:47:48.185431    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:47:48.506389    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:47:49.148657    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:47:49.187206    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:47:50.430051    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:47:51.136371    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:47:52.990961    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:47:55.879606    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
E1212 12:47:58.111198    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:48:01.376726    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:48:08.353031    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:48:16.880918    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:48:18.029049    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
E1212 12:48:21.856681    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:48:28.834013    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:48:55.343764    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:49:02.817478    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:49:09.795473    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:49:14.876298    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:49:17.163832    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
E1212 12:49:42.604873    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:49:43.251143    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-608000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (8m7.126183345s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-608000 -n old-k8s-version-608000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (487.29s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-67ftv" [2cb31da2-4cb7-4351-b821-41d5616249ad] Running
E1212 12:50:10.966443    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013479127s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-67ftv" [2cb31da2-4cb7-4351-b821-41d5616249ad] Running
E1212 12:50:17.257284    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008369736s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-581000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-581000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/no-preload/serial/Pause (1.91s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-581000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-581000 -n no-preload-581000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-581000 -n no-preload-581000: exit status 2 (162.250435ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-581000 -n no-preload-581000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-581000 -n no-preload-581000: exit status 2 (163.304944ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-581000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-581000 -n no-preload-581000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-581000 -n no-preload-581000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.91s)

TestStartStop/group/embed-certs/serial/FirstStart (50.2s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-404000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4
E1212 12:50:31.714844    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:50:34.185267    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
E1212 12:51:01.867383    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
E1212 12:51:11.497883    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-404000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4: (50.204504992s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (50.20s)

TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-404000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e61e8115-f9b9-4a3a-aae3-82cc31363e64] Pending
helpers_test.go:344: "busybox" [e61e8115-f9b9-4a3a-aae3-82cc31363e64] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e61e8115-f9b9-4a3a-aae3-82cc31363e64] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.01588694s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-404000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-404000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-404000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/embed-certs/serial/Stop (8.29s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-404000 --alsologtostderr -v=3
E1212 12:51:32.826407    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-404000 --alsologtostderr -v=3: (8.29460139s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.29s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000: exit status 7 (68.460958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-404000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/embed-certs/serial/SecondStart (298.52s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-404000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4
E1212 12:51:39.183146    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:52:13.045583    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:52:40.892137    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:52:47.859561    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:52:49.183263    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
E1212 12:53:08.577514    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
E1212 12:53:15.555123    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-404000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4: (4m58.342442085s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-404000 -n embed-certs-404000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (298.52s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-tmlnn" [11e95036-fde8-4404-8398-cb9ca4f5be63] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011312043s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-tmlnn" [11e95036-fde8-4404-8398-cb9ca4f5be63] Running
E1212 12:54:14.873150    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
E1212 12:54:17.159816    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/ingress-addon-legacy-649000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007409052s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-608000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-608000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/old-k8s-version/serial/Pause (1.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-608000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-608000 -n old-k8s-version-608000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-608000 -n old-k8s-version-608000: exit status 2 (160.30153ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-608000 -n old-k8s-version-608000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-608000 -n old-k8s-version-608000: exit status 2 (160.101495ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-608000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-608000 -n old-k8s-version-608000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-608000 -n old-k8s-version-608000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.74s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-342000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4
E1212 12:54:43.249135    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/enable-default-cni-183000/client.crt: no such file or directory
E1212 12:54:47.495443    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 12:54:47.500579    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 12:54:47.512413    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 12:54:47.532630    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 12:54:47.573619    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 12:54:47.654682    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 12:54:47.814807    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 12:54:48.136049    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 12:54:48.777195    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 12:54:50.057310    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 12:54:52.618484    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 12:54:57.739694    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 12:55:00.299937    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
E1212 12:55:07.980885    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 12:55:17.254365    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/addons-572000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-342000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4: (51.214039504s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-342000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [81d05b0a-a7c7-4a8d-af76-6b2a57ea90c2] Pending
helpers_test.go:344: "busybox" [81d05b0a-a7c7-4a8d-af76-6b2a57ea90c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1212 12:55:19.757794    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/functional-303000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [81d05b0a-a7c7-4a8d-af76-6b2a57ea90c2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.016048084s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-342000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-342000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-342000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-342000 --alsologtostderr -v=3
E1212 12:55:28.460864    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 12:55:34.180668    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/kubenet-183000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-342000 --alsologtostderr -v=3: (8.2735191s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-342000 -n default-k8s-diff-port-342000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-342000 -n default-k8s-diff-port-342000: exit status 7 (69.03957ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-342000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.33s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-342000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4
E1212 12:55:43.847355    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
E1212 12:55:43.853427    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
E1212 12:55:43.865277    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
E1212 12:55:43.885868    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
E1212 12:55:43.927384    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
E1212 12:55:44.008301    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
E1212 12:55:44.169756    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
E1212 12:55:44.490113    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
E1212 12:55:45.130984    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
E1212 12:55:46.411351    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
E1212 12:55:48.972262    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
E1212 12:55:54.093980    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
E1212 12:56:04.335327    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
E1212 12:56:09.422187    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
E1212 12:56:11.493420    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/custom-flannel-183000/client.crt: no such file or directory
E1212 12:56:24.815371    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-342000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4: (4m58.734832949s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-342000 -n default-k8s-diff-port-342000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.90s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sfcwq" [a1dc9f8c-e845-46d8-8ff0-78c8ebfa24a1] Running
E1212 12:56:32.823436    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/skaffold-812000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011989035s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sfcwq" [a1dc9f8c-e845-46d8-8ff0-78c8ebfa24a1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007197927s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-404000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-404000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (1.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-404000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-404000 -n embed-certs-404000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-404000 -n embed-certs-404000: exit status 2 (162.561425ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-404000 -n embed-certs-404000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-404000 -n embed-certs-404000: exit status 2 (162.012867ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-404000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-404000 -n embed-certs-404000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-404000 -n embed-certs-404000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.95s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (46.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-187000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2
E1212 12:57:05.775980    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
E1212 12:57:13.043655    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/auto-183000/client.crt: no such file or directory
E1212 12:57:31.342858    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/no-preload-581000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-187000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2: (46.234631852s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-187000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-187000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.221102996s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-187000 --alsologtostderr -v=3
E1212 12:57:40.888858    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/false-183000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-187000 --alsologtostderr -v=3: (8.25902515s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-187000 -n newest-cni-187000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-187000 -n newest-cni-187000: exit status 7 (68.34683ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-187000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-187000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2
E1212 12:57:47.856701    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/calico-183000/client.crt: no such file or directory
E1212 12:57:49.179561    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/flannel-183000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-187000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2: (36.973730509s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-187000 -n newest-cni-187000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-187000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (1.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-187000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-187000 -n newest-cni-187000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-187000 -n newest-cni-187000: exit status 2 (156.389666ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-187000 -n newest-cni-187000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-187000 -n newest-cni-187000: exit status 2 (158.12019ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-187000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-187000 -n newest-cni-187000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-187000 -n newest-cni-187000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.75s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-skjpz" [0a7de0d4-1590-4e26-abd5-a9c482b248b2] Running
E1212 13:00:37.958868    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/bridge-183000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013436704s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-skjpz" [0a7de0d4-1590-4e26-abd5-a9c482b248b2] Running
E1212 13:00:43.845147    3198 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/old-k8s-version-608000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006113739s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-342000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-342000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (1.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-342000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-342000 -n default-k8s-diff-port-342000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-342000 -n default-k8s-diff-port-342000: exit status 2 (158.500895ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-342000 -n default-k8s-diff-port-342000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-342000 -n default-k8s-diff-port-342000: exit status 2 (156.943811ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-342000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-342000 -n default-k8s-diff-port-342000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-342000 -n default-k8s-diff-port-342000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.88s)

                                                
                                    

Test skip (22/323)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-183000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-183000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-183000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-183000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-183000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-183000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-183000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-183000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-183000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-183000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-183000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-183000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-183000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-183000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-183000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-183000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/jenkins/minikube-integration/17734-1975/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 12:19:01 PST
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.169.0.14:8443
  name: multinode-675000-m01
contexts:
- context:
    cluster: multinode-675000-m01
    extensions:
    - extension:
        last-update: Tue, 12 Dec 2023 12:19:01 PST
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: multinode-675000-m01
  name: multinode-675000-m01
current-context: ""
kind: Config
preferences: {}
users:
- name: multinode-675000-m01
  user:
    client-certificate: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m01/client.crt
    client-key: /Users/jenkins/minikube-integration/17734-1975/.minikube/profiles/multinode-675000-m01/client.key

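Note: the kubeconfig above only defines the multinode-675000-m01 cluster, context, and user, with current-context set to "", which is consistent with every cilium-183000 probe above failing with "context does not exist". A minimal sketch of how that check could be reproduced with client-go follows; the "cilium-183000" context name comes from the log above, while the kubeconfig path is a placeholder and the snippet is illustrative, not part of the test suite.

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; point this at the KUBECONFIG used by the run.
	cfg, err := clientcmd.LoadFromFile("/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// The debug probes above targeted the "cilium-183000" context, which was
	// never written to the kubeconfig because the test was skipped before
	// "minikube start -p cilium-183000" ever ran.
	if _, ok := cfg.Contexts["cilium-183000"]; !ok {
		fmt.Println(`context "cilium-183000" does not exist`)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
}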
                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-183000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-183000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-183000"

                                                
                                                
----------------------- debugLogs end: cilium-183000 [took: 5.859986164s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-183000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-183000
--- SKIP: TestNetworkPlugins/group/cilium (6.25s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-644000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-644000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.38s)

                                                
                                    