Test Report: QEMU_macOS 19700

8b226b9d2c09f79dcc3a887682b5a6bd27a95904 · 2024-09-24 · 36357

Failed tests (99/273)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 19.7
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.27
22 TestOffline 9.94
33 TestAddons/parallel/Registry 71.37
45 TestCertOptions 10.13
46 TestCertExpiration 195.44
47 TestDockerFlags 10.32
48 TestForceSystemdFlag 10.16
49 TestForceSystemdEnv 11.54
94 TestFunctional/parallel/ServiceCmdConnect 42.79
166 TestMultiControlPlane/serial/StopSecondaryNode 162.25
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 150.14
168 TestMultiControlPlane/serial/RestartSecondaryNode 185.28
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.54
171 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
173 TestMultiControlPlane/serial/StopCluster 300.23
174 TestMultiControlPlane/serial/RestartCluster 5.25
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
176 TestMultiControlPlane/serial/AddSecondaryNode 0.07
180 TestImageBuild/serial/Setup 9.95
183 TestJSONOutput/start/Command 9.83
189 TestJSONOutput/pause/Command 0.08
195 TestJSONOutput/unpause/Command 0.05
212 TestMinikubeProfile 10.22
215 TestMountStart/serial/StartWithMountFirst 9.97
218 TestMultiNode/serial/FreshStart2Nodes 9.85
219 TestMultiNode/serial/DeployApp2Nodes 73.95
220 TestMultiNode/serial/PingHostFrom2Pods 0.09
221 TestMultiNode/serial/AddNode 0.07
222 TestMultiNode/serial/MultiNodeLabels 0.06
223 TestMultiNode/serial/ProfileList 0.08
224 TestMultiNode/serial/CopyFile 0.06
225 TestMultiNode/serial/StopNode 0.14
226 TestMultiNode/serial/StartAfterStop 45.74
227 TestMultiNode/serial/RestartKeepsNodes 9
228 TestMultiNode/serial/DeleteNode 0.1
229 TestMultiNode/serial/StopMultiNode 3.36
230 TestMultiNode/serial/RestartMultiNode 5.25
231 TestMultiNode/serial/ValidateNameConflict 20.33
235 TestPreload 10.1
237 TestScheduledStopUnix 10.07
238 TestSkaffold 13.06
241 TestRunningBinaryUpgrade 598.36
243 TestKubernetesUpgrade 17.3
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.38
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.09
259 TestStoppedBinaryUpgrade/Upgrade 574.4
261 TestPause/serial/Start 9.93
271 TestNoKubernetes/serial/StartWithK8s 9.85
272 TestNoKubernetes/serial/StartWithStopK8s 5.29
273 TestNoKubernetes/serial/Start 5.33
277 TestNoKubernetes/serial/StartNoArgs 5.3
279 TestNetworkPlugins/group/auto/Start 9.91
280 TestNetworkPlugins/group/kindnet/Start 9.79
281 TestNetworkPlugins/group/flannel/Start 9.82
282 TestNetworkPlugins/group/enable-default-cni/Start 9.86
283 TestNetworkPlugins/group/bridge/Start 9.95
284 TestNetworkPlugins/group/kubenet/Start 9.78
285 TestNetworkPlugins/group/custom-flannel/Start 9.92
286 TestNetworkPlugins/group/calico/Start 9.9
287 TestNetworkPlugins/group/false/Start 9.9
290 TestStartStop/group/old-k8s-version/serial/FirstStart 9.79
291 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
295 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
296 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
297 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
299 TestStartStop/group/old-k8s-version/serial/Pause 0.11
301 TestStartStop/group/no-preload/serial/FirstStart 9.91
303 TestStartStop/group/embed-certs/serial/FirstStart 10.64
304 TestStartStop/group/no-preload/serial/DeployApp 0.1
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.14
308 TestStartStop/group/no-preload/serial/SecondStart 7.26
309 TestStartStop/group/embed-certs/serial/DeployApp 0.1
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
312 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
313 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.09
314 TestStartStop/group/no-preload/serial/Pause 0.1
317 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.96
319 TestStartStop/group/embed-certs/serial/SecondStart 6.86
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
321 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
322 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
324 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/embed-certs/serial/Pause 0.1
328 TestStartStop/group/newest-cni/serial/FirstStart 9.92
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.61
331 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.07
333 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
334 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
339 TestStartStop/group/newest-cni/serial/SecondStart 5.26
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
343 TestStartStop/group/newest-cni/serial/Pause 0.1
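
Most of the qemu2 "Start" failures below share a single symptom, visible in the full logs that follow (see TestOffline): the VM launch fails with Failed to connect to "/var/run/socket_vmnet": Connection refused. A minimal diagnostic sketch for the build host, assuming the install layout shown in the logs (whether the daemon runs under launchd or by hand on this agent is an assumption):

	# Does the unix socket exist, and is a socket_vmnet daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If it is not running, socket_vmnet's documented invocation is roughly:
	#   sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
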
TestDownloadOnly/v1.20.0/json-events (19.7s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-823000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-823000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (19.696568333s)

-- stdout --
	{"specversion":"1.0","id":"f4a1e847-cd4b-4f4e-a308-12715446c3b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-823000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"24c78f5a-504a-47d6-8beb-229cdd278b64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19700"}}
	{"specversion":"1.0","id":"b20ba539-165e-45c2-a912-a7f0bb321850","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig"}}
	{"specversion":"1.0","id":"e7709f9a-70d1-4ec0-98dc-2ec7c9d4719b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a25024da-9ea2-4ba5-9875-8a6336bc4fcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1db9ff7b-0bc3-4216-afab-3640500e0e57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube"}}
	{"specversion":"1.0","id":"dc484dde-43e6-489d-84b9-295512a7191f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"5c689282-c626-48a2-a989-d1cee53b8047","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9b7a6bf-3eca-419f-b4e2-f3f1c941af2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"5eda0943-5516-448f-9c05-a5229d4add13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f523b445-3688-42b0-a0da-1aaceec64d99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-823000\" primary control-plane node in \"download-only-823000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d72c7dd2-3cca-4c71-99b0-3c6bbfc64b69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e808635e-8539-4b30-9452-431ea697a268","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1068996c0 0x1068996c0 0x1068996c0 0x1068996c0 0x1068996c0 0x1068996c0 0x1068996c0] Decompressors:map[bz2:0x14000803730 gz:0x14000803738 tar:0x140008036e0 tar.bz2:0x140008036f0 tar.gz:0x14000803700 tar.xz:0x14000803710 tar.zst:0x14000803720 tbz2:0x140008036f0 tgz:0x14000803700 txz:0x14000803710 tzst:0x14000803720 xz:0x14000803740 zip:0x14000803750 zst:0x14000803748] Getters:map[file:0x1400078cac0 http:0x1400071e2d0 https:0x1400071e320] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"caa7dc2a-8592-45d3-9e42-c5e720b8a975","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0924 11:18:44.101896    1599 out.go:345] Setting OutFile to fd 1 ...
	I0924 11:18:44.102047    1599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:18:44.102051    1599 out.go:358] Setting ErrFile to fd 2...
	I0924 11:18:44.102053    1599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:18:44.102182    1599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	W0924 11:18:44.102266    1599 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19700-1081/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19700-1081/.minikube/config/config.json: no such file or directory
	I0924 11:18:44.103511    1599 out.go:352] Setting JSON to true
	I0924 11:18:44.121049    1599 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1095,"bootTime":1727200829,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 11:18:44.121110    1599 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 11:18:44.127342    1599 out.go:97] [download-only-823000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 11:18:44.127502    1599 notify.go:220] Checking for updates...
	W0924 11:18:44.127572    1599 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball: no such file or directory
	I0924 11:18:44.131312    1599 out.go:169] MINIKUBE_LOCATION=19700
	I0924 11:18:44.134479    1599 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 11:18:44.138367    1599 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 11:18:44.141420    1599 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 11:18:44.144289    1599 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	W0924 11:18:44.150297    1599 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0924 11:18:44.150537    1599 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 11:18:44.155252    1599 out.go:97] Using the qemu2 driver based on user configuration
	I0924 11:18:44.155270    1599 start.go:297] selected driver: qemu2
	I0924 11:18:44.155283    1599 start.go:901] validating driver "qemu2" against <nil>
	I0924 11:18:44.155354    1599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 11:18:44.158281    1599 out.go:169] Automatically selected the socket_vmnet network
	I0924 11:18:44.163803    1599 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0924 11:18:44.163928    1599 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 11:18:44.163981    1599 cni.go:84] Creating CNI manager for ""
	I0924 11:18:44.164019    1599 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0924 11:18:44.164070    1599 start.go:340] cluster config:
	{Name:download-only-823000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-823000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 11:18:44.169433    1599 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 11:18:44.172358    1599 out.go:97] Downloading VM boot image ...
	I0924 11:18:44.172380    1599 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I0924 11:18:52.176743    1599 out.go:97] Starting "download-only-823000" primary control-plane node in "download-only-823000" cluster
	I0924 11:18:52.176762    1599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0924 11:18:52.231505    1599 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0924 11:18:52.231511    1599 cache.go:56] Caching tarball of preloaded images
	I0924 11:18:52.231707    1599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0924 11:18:52.235096    1599 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0924 11:18:52.235103    1599 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0924 11:18:52.326372    1599 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0924 11:19:02.441698    1599 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0924 11:19:02.441878    1599 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0924 11:19:03.138846    1599 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0924 11:19:03.139055    1599 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/download-only-823000/config.json ...
	I0924 11:19:03.139075    1599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/download-only-823000/config.json: {Name:mkea315355728d670f6c8314367e1e532a813e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:03.139354    1599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0924 11:19:03.139561    1599 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0924 11:19:03.722691    1599 out.go:193] 
	W0924 11:19:03.728968    1599 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1068996c0 0x1068996c0 0x1068996c0 0x1068996c0 0x1068996c0 0x1068996c0 0x1068996c0] Decompressors:map[bz2:0x14000803730 gz:0x14000803738 tar:0x140008036e0 tar.bz2:0x140008036f0 tar.gz:0x14000803700 tar.xz:0x14000803710 tar.zst:0x14000803720 tbz2:0x140008036f0 tgz:0x14000803700 txz:0x14000803710 tzst:0x14000803720 xz:0x14000803740 zip:0x14000803750 zst:0x14000803748] Getters:map[file:0x1400078cac0 http:0x1400071e2d0 https:0x1400071e320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0924 11:19:03.728997    1599 out_reason.go:110] 
	W0924 11:19:03.736666    1599 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 11:19:03.739788    1599 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-823000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (19.70s)
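
The exit status 40 traces to the kubectl cache step: the checksum URL for v1.20.0 on darwin/arm64 returns 404 (plausibly because upstream only began publishing darwin/arm64 kubectl binaries in later releases). A quick manual check, sketched with curl against the URL copied verbatim from the log:

	# per the log, the checksum file answers with "bad response code: 404"
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1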

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
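
This is a knock-on failure from json-events above: kubectl was never cached, so the existence check cannot pass. The equivalent manual check (path copied from the log):

	# fails with "no such file or directory" until the download above succeeds
	stat /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/darwin/arm64/v1.20.0/kubectl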

TestBinaryMirror (0.27s)

=== RUN   TestBinaryMirror
I0924 11:19:12.952536    1598 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-079000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-079000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 : exit status 40 (165.445583ms)

-- stdout --
	* [binary-mirror-079000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-079000" primary control-plane node in "binary-mirror-079000" cluster
	
	

-- /stdout --
** stderr ** 
	I0924 11:19:13.012056    1664 out.go:345] Setting OutFile to fd 1 ...
	I0924 11:19:13.012189    1664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:19:13.012192    1664 out.go:358] Setting ErrFile to fd 2...
	I0924 11:19:13.012195    1664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:19:13.012318    1664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 11:19:13.013443    1664 out.go:352] Setting JSON to false
	I0924 11:19:13.029536    1664 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1124,"bootTime":1727200829,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 11:19:13.029607    1664 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 11:19:13.034254    1664 out.go:177] * [binary-mirror-079000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 11:19:13.043230    1664 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 11:19:13.043281    1664 notify.go:220] Checking for updates...
	I0924 11:19:13.050166    1664 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 11:19:13.053200    1664 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 11:19:13.056187    1664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 11:19:13.059220    1664 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 11:19:13.062329    1664 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 11:19:13.066211    1664 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 11:19:13.073189    1664 start.go:297] selected driver: qemu2
	I0924 11:19:13.073197    1664 start.go:901] validating driver "qemu2" against <nil>
	I0924 11:19:13.073264    1664 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 11:19:13.076179    1664 out.go:177] * Automatically selected the socket_vmnet network
	I0924 11:19:13.081408    1664 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0924 11:19:13.081550    1664 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 11:19:13.081570    1664 cni.go:84] Creating CNI manager for ""
	I0924 11:19:13.081594    1664 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 11:19:13.081601    1664 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 11:19:13.081649    1664 start.go:340] cluster config:
	{Name:binary-mirror-079000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-079000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49312 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 11:19:13.085272    1664 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 11:19:13.093214    1664 out.go:177] * Starting "binary-mirror-079000" primary control-plane node in "binary-mirror-079000" cluster
	I0924 11:19:13.097186    1664 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 11:19:13.097204    1664 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 11:19:13.097213    1664 cache.go:56] Caching tarball of preloaded images
	I0924 11:19:13.097290    1664 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 11:19:13.097296    1664 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 11:19:13.097495    1664 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/binary-mirror-079000/config.json ...
	I0924 11:19:13.097506    1664 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/binary-mirror-079000/config.json: {Name:mk6dfccc3fbc76c59df1c0c69382da6fcd5e2546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:13.097860    1664 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 11:19:13.097914    1664 download.go:107] Downloading: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0924 11:19:13.122331    1664 out.go:201] 
	W0924 11:19:13.126204    1664 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1067956c0 0x1067956c0 0x1067956c0 0x1067956c0 0x1067956c0 0x1067956c0 0x1067956c0] Decompressors:map[bz2:0x140003f37b0 gz:0x140003f37b8 tar:0x140003f3760 tar.bz2:0x140003f3770 tar.gz:0x140003f3780 tar.xz:0x140003f3790 tar.zst:0x140003f37a0 tbz2:0x140003f3770 tgz:0x140003f3780 txz:0x140003f3790 tzst:0x140003f37a0 xz:0x140003f37c0 zip:0x140003f37d0 zst:0x140003f37c8] Getters:map[file:0x14000597220 http:0x14000707bd0 https:0x14000707c20] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1067956c0 0x1067956c0 0x1067956c0 0x1067956c0 0x1067956c0 0x1067956c0 0x1067956c0] Decompressors:map[bz2:0x140003f37b0 gz:0x140003f37b8 tar:0x140003f3760 tar.bz2:0x140003f3770 tar.gz:0x140003f3780 tar.xz:0x140003f3790 tar.zst:0x140003f37a0 tbz2:0x140003f3770 tgz:0x140003f3780 txz:0x140003f3790 tzst:0x140003f37a0 xz:0x140003f37c0 zip:0x140003f37d0 zst:0x140003f37c8] Getters:map[file:0x14000597220 http:0x14000707bd0 https:0x14000707c20] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	W0924 11:19:13.126215    1664 out.go:270] * 
	* 
	W0924 11:19:13.126664    1664 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 11:19:13.141289    1664 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-079000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49312" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-079000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-079000
--- FAIL: TestBinaryMirror (0.27s)
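
Unlike the 404 above, this download dies with "unexpected EOF": the test's local mirror on 127.0.0.1:49312 closed the connection mid-response. A hedged probe while such a mirror is still up (the port is ephemeral to this run, so the command is only illustrative):

	# fetch the same checksum file the getter requested and print the HTTP status
	curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256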

TestOffline (9.94s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-215000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-215000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.788134125s)

-- stdout --
	* [offline-docker-215000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-215000" primary control-plane node in "offline-docker-215000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-215000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:07:02.869569    4104 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:07:02.869714    4104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:07:02.869717    4104 out.go:358] Setting ErrFile to fd 2...
	I0924 12:07:02.869720    4104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:07:02.869851    4104 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:07:02.871072    4104 out.go:352] Setting JSON to false
	I0924 12:07:02.888948    4104 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3993,"bootTime":1727200829,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:07:02.889025    4104 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:07:02.893431    4104 out.go:177] * [offline-docker-215000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:07:02.900271    4104 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:07:02.900323    4104 notify.go:220] Checking for updates...
	I0924 12:07:02.908301    4104 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:07:02.911209    4104 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:07:02.912422    4104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:07:02.915188    4104 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:07:02.918250    4104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:07:02.921558    4104 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:07:02.921615    4104 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:07:02.925147    4104 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:07:02.932220    4104 start.go:297] selected driver: qemu2
	I0924 12:07:02.932228    4104 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:07:02.932235    4104 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:07:02.934042    4104 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:07:02.937122    4104 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:07:02.940246    4104 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:07:02.940269    4104 cni.go:84] Creating CNI manager for ""
	I0924 12:07:02.940292    4104 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:07:02.940300    4104 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 12:07:02.940349    4104 start.go:340] cluster config:
	{Name:offline-docker-215000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-215000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:07:02.944036    4104 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:07:02.951244    4104 out.go:177] * Starting "offline-docker-215000" primary control-plane node in "offline-docker-215000" cluster
	I0924 12:07:02.955209    4104 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:07:02.955232    4104 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:07:02.955241    4104 cache.go:56] Caching tarball of preloaded images
	I0924 12:07:02.955322    4104 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:07:02.955328    4104 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:07:02.955392    4104 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/offline-docker-215000/config.json ...
	I0924 12:07:02.955408    4104 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/offline-docker-215000/config.json: {Name:mkc3a0a08d4adb222a6bc07a80e050c3b1c4af28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:07:02.955705    4104 start.go:360] acquireMachinesLock for offline-docker-215000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:07:02.955743    4104 start.go:364] duration metric: took 29.292µs to acquireMachinesLock for "offline-docker-215000"
	I0924 12:07:02.955756    4104 start.go:93] Provisioning new machine with config: &{Name:offline-docker-215000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-215000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:07:02.955780    4104 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:07:02.960204    4104 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0924 12:07:02.975928    4104 start.go:159] libmachine.API.Create for "offline-docker-215000" (driver="qemu2")
	I0924 12:07:02.975956    4104 client.go:168] LocalClient.Create starting
	I0924 12:07:02.976030    4104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:07:02.976061    4104 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:02.976070    4104 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:02.976115    4104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:07:02.976138    4104 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:02.976146    4104 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:02.976507    4104 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:07:03.134577    4104 main.go:141] libmachine: Creating SSH key...
	I0924 12:07:03.222943    4104 main.go:141] libmachine: Creating Disk image...
	I0924 12:07:03.222956    4104 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:07:03.223185    4104 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/disk.qcow2
	I0924 12:07:03.241440    4104 main.go:141] libmachine: STDOUT: 
	I0924 12:07:03.241464    4104 main.go:141] libmachine: STDERR: 
	I0924 12:07:03.241530    4104 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/disk.qcow2 +20000M
	I0924 12:07:03.250067    4104 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:07:03.250088    4104 main.go:141] libmachine: STDERR: 
	I0924 12:07:03.250115    4104 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/disk.qcow2
	I0924 12:07:03.250121    4104 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:07:03.250134    4104 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:07:03.250163    4104 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:32:ff:06:98:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/disk.qcow2
	I0924 12:07:03.251912    4104 main.go:141] libmachine: STDOUT: 
	I0924 12:07:03.251928    4104 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:07:03.251954    4104 client.go:171] duration metric: took 275.9915ms to LocalClient.Create
	I0924 12:07:05.254016    4104 start.go:128] duration metric: took 2.298241s to createHost
	I0924 12:07:05.254034    4104 start.go:83] releasing machines lock for "offline-docker-215000", held for 2.298297875s
	W0924 12:07:05.254044    4104 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:07:05.266186    4104 out.go:177] * Deleting "offline-docker-215000" in qemu2 ...
	W0924 12:07:05.277712    4104 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:07:05.277726    4104 start.go:729] Will try again in 5 seconds ...
	I0924 12:07:10.279870    4104 start.go:360] acquireMachinesLock for offline-docker-215000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:07:10.280048    4104 start.go:364] duration metric: took 134.583µs to acquireMachinesLock for "offline-docker-215000"
	I0924 12:07:10.280094    4104 start.go:93] Provisioning new machine with config: &{Name:offline-docker-215000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-215000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:07:10.280180    4104 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:07:10.291492    4104 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0924 12:07:10.318880    4104 start.go:159] libmachine.API.Create for "offline-docker-215000" (driver="qemu2")
	I0924 12:07:10.318925    4104 client.go:168] LocalClient.Create starting
	I0924 12:07:10.319005    4104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:07:10.319053    4104 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:10.319065    4104 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:10.319109    4104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:07:10.319143    4104 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:10.319153    4104 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:10.320152    4104 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:07:10.488121    4104 main.go:141] libmachine: Creating SSH key...
	I0924 12:07:10.555521    4104 main.go:141] libmachine: Creating Disk image...
	I0924 12:07:10.555527    4104 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:07:10.555720    4104 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/disk.qcow2
	I0924 12:07:10.564921    4104 main.go:141] libmachine: STDOUT: 
	I0924 12:07:10.564940    4104 main.go:141] libmachine: STDERR: 
	I0924 12:07:10.565005    4104 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/disk.qcow2 +20000M
	I0924 12:07:10.572703    4104 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:07:10.572723    4104 main.go:141] libmachine: STDERR: 
	I0924 12:07:10.572735    4104 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/disk.qcow2
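
The disk image above is built in two steps: an empty raw file is converted to a qcow2 image, which is then grown by 20000M. Because qcow2 is sparse, the 20000 MB is a virtual size, not space allocated up front. A minimal sketch of the same sequence, with illustrative paths rather than the test's:

    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M
    qemu-img info disk.qcow2   # reports virtual size vs. actual bytes on disk
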
	I0924 12:07:10.572740    4104 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:07:10.572750    4104 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:07:10.572788    4104 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:c6:24:e3:ef:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/offline-docker-215000/disk.qcow2
	I0924 12:07:10.574287    4104 main.go:141] libmachine: STDOUT: 
	I0924 12:07:10.574298    4104 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:07:10.574314    4104 client.go:171] duration metric: took 255.386ms to LocalClient.Create
	I0924 12:07:12.576502    4104 start.go:128] duration metric: took 2.296306583s to createHost
	I0924 12:07:12.576630    4104 start.go:83] releasing machines lock for "offline-docker-215000", held for 2.296539666s
	W0924 12:07:12.576994    4104 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-215000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-215000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:07:12.593789    4104 out.go:201] 
	W0924 12:07:12.597763    4104 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:07:12.597808    4104 out.go:270] * 
	* 
	W0924 12:07:12.600243    4104 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:07:12.616722    4104 out.go:201] 

** /stderr **
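
The start fails before QEMU is ever launched: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A quick triage sketch, assuming socket_vmnet was installed via Homebrew; these commands are illustrative and not part of the test:

    ls -l /var/run/socket_vmnet                        # the listening UNIX socket should exist
    sudo lsof -U | grep socket_vmnet                   # a socket_vmnet process should hold it
    sudo "$(which brew)" services start socket_vmnet   # restart the daemon if it is not running
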
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-215000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-24 12:07:12.628981 -0700 PDT m=+2908.564586042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-215000 -n offline-docker-215000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-215000 -n offline-docker-215000: exit status 7 (67.995167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-215000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-215000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-215000
--- FAIL: TestOffline (9.94s)

TestAddons/parallel/Registry (71.37s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.301875ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-dr9lr" [56205fad-453c-44bc-b682-3000f315999a] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003447625s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jxwjp" [c4fce1ef-d25b-40aa-add1-32efc614c74c] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010257709s
addons_test.go:338: (dbg) Run:  kubectl --context addons-472000 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-472000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-472000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.078528583s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-472000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
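
Both registry pods were healthy, so the failure is in-cluster reachability of the Service's DNS name from a client pod, which the one-minute wget timeout does not decompose. A hedged way to separate DNS resolution from HTTP connectivity (the pod name registry-probe is hypothetical):

    kubectl --context addons-472000 run registry-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c 'nslookup registry.kube-system.svc.cluster.local && \
             wget --spider -S http://registry.kube-system.svc.cluster.local'
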
addons_test.go:357: (dbg) Run:  out/minikube-darwin-arm64 -p addons-472000 ip
2024/09/24 11:32:21 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-darwin-arm64 -p addons-472000 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-472000 -n addons-472000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-472000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-823000 | jenkins | v1.34.0 | 24 Sep 24 11:18 PDT |                     |
	|         | -p download-only-823000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 24 Sep 24 11:19 PDT | 24 Sep 24 11:19 PDT |
	| delete  | -p download-only-823000                                                                     | download-only-823000 | jenkins | v1.34.0 | 24 Sep 24 11:19 PDT | 24 Sep 24 11:19 PDT |
	| start   | -o=json --download-only                                                                     | download-only-295000 | jenkins | v1.34.0 | 24 Sep 24 11:19 PDT |                     |
	|         | -p download-only-295000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 24 Sep 24 11:19 PDT | 24 Sep 24 11:19 PDT |
	| delete  | -p download-only-295000                                                                     | download-only-295000 | jenkins | v1.34.0 | 24 Sep 24 11:19 PDT | 24 Sep 24 11:19 PDT |
	| delete  | -p download-only-823000                                                                     | download-only-823000 | jenkins | v1.34.0 | 24 Sep 24 11:19 PDT | 24 Sep 24 11:19 PDT |
	| delete  | -p download-only-295000                                                                     | download-only-295000 | jenkins | v1.34.0 | 24 Sep 24 11:19 PDT | 24 Sep 24 11:19 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-079000 | jenkins | v1.34.0 | 24 Sep 24 11:19 PDT |                     |
	|         | binary-mirror-079000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49312                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-079000                                                                     | binary-mirror-079000 | jenkins | v1.34.0 | 24 Sep 24 11:19 PDT | 24 Sep 24 11:19 PDT |
	| addons  | disable dashboard -p                                                                        | addons-472000        | jenkins | v1.34.0 | 24 Sep 24 11:19 PDT |                     |
	|         | addons-472000                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-472000        | jenkins | v1.34.0 | 24 Sep 24 11:19 PDT |                     |
	|         | addons-472000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-472000 --wait=true                                                                | addons-472000        | jenkins | v1.34.0 | 24 Sep 24 11:19 PDT | 24 Sep 24 11:22 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-472000 addons disable                                                                | addons-472000        | jenkins | v1.34.0 | 24 Sep 24 11:22 PDT | 24 Sep 24 11:23 PDT |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-472000        | jenkins | v1.34.0 | 24 Sep 24 11:31 PDT | 24 Sep 24 11:31 PDT |
	|         | -p addons-472000                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-472000 addons disable                                                                | addons-472000        | jenkins | v1.34.0 | 24 Sep 24 11:31 PDT | 24 Sep 24 11:31 PDT |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-472000 addons disable                                                                | addons-472000        | jenkins | v1.34.0 | 24 Sep 24 11:31 PDT | 24 Sep 24 11:31 PDT |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-472000        | jenkins | v1.34.0 | 24 Sep 24 11:31 PDT | 24 Sep 24 11:31 PDT |
	|         | -p addons-472000                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-472000 ssh cat                                                                       | addons-472000        | jenkins | v1.34.0 | 24 Sep 24 11:31 PDT | 24 Sep 24 11:31 PDT |
	|         | /opt/local-path-provisioner/pvc-5d9acef8-c72c-4cb0-b678-9b4ebcfd0da9_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-472000 addons disable                                                                | addons-472000        | jenkins | v1.34.0 | 24 Sep 24 11:31 PDT |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-472000 ip                                                                            | addons-472000        | jenkins | v1.34.0 | 24 Sep 24 11:32 PDT | 24 Sep 24 11:32 PDT |
	| addons  | addons-472000 addons disable                                                                | addons-472000        | jenkins | v1.34.0 | 24 Sep 24 11:32 PDT | 24 Sep 24 11:32 PDT |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 11:19:13
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 11:19:13.306675    1678 out.go:345] Setting OutFile to fd 1 ...
	I0924 11:19:13.307032    1678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:19:13.307037    1678 out.go:358] Setting ErrFile to fd 2...
	I0924 11:19:13.307040    1678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:19:13.307253    1678 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 11:19:13.308627    1678 out.go:352] Setting JSON to false
	I0924 11:19:13.325018    1678 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1124,"bootTime":1727200829,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 11:19:13.325101    1678 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 11:19:13.330248    1678 out.go:177] * [addons-472000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 11:19:13.337239    1678 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 11:19:13.337294    1678 notify.go:220] Checking for updates...
	I0924 11:19:13.344152    1678 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 11:19:13.347185    1678 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 11:19:13.350230    1678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 11:19:13.353194    1678 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 11:19:13.356253    1678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 11:19:13.359407    1678 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 11:19:13.362151    1678 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 11:19:13.369173    1678 start.go:297] selected driver: qemu2
	I0924 11:19:13.369180    1678 start.go:901] validating driver "qemu2" against <nil>
	I0924 11:19:13.369186    1678 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 11:19:13.371418    1678 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 11:19:13.372725    1678 out.go:177] * Automatically selected the socket_vmnet network
	I0924 11:19:13.375309    1678 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 11:19:13.375324    1678 cni.go:84] Creating CNI manager for ""
	I0924 11:19:13.375348    1678 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 11:19:13.375352    1678 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 11:19:13.375384    1678 start.go:340] cluster config:
	{Name:addons-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-472000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
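
Because the qemu2 driver with the docker runtime selects the bridge CNI above (NetworkPlugin=cni), minikube will place a bridge conflist under /etc/cni/net.d inside the guest. A minimal sketch of such a conflist, assuming the host-local IPAM and a 10.244.0.0/16 pod subnet that bridge setups commonly use; the file name and values are illustrative, not minikube's exact template:

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    EOF
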
	I0924 11:19:13.379187    1678 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 11:19:13.388196    1678 out.go:177] * Starting "addons-472000" primary control-plane node in "addons-472000" cluster
	I0924 11:19:13.392180    1678 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 11:19:13.392196    1678 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 11:19:13.392206    1678 cache.go:56] Caching tarball of preloaded images
	I0924 11:19:13.392285    1678 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 11:19:13.392290    1678 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 11:19:13.392481    1678 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/config.json ...
	I0924 11:19:13.392492    1678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/config.json: {Name:mkc34c052a82429898d988b67be9f629946933da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:13.392760    1678 start.go:360] acquireMachinesLock for addons-472000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 11:19:13.392819    1678 start.go:364] duration metric: took 53.292µs to acquireMachinesLock for "addons-472000"
	I0924 11:19:13.392830    1678 start.go:93] Provisioning new machine with config: &{Name:addons-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-472000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 11:19:13.392856    1678 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 11:19:13.400183    1678 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0924 11:19:13.626741    1678 start.go:159] libmachine.API.Create for "addons-472000" (driver="qemu2")
	I0924 11:19:13.626793    1678 client.go:168] LocalClient.Create starting
	I0924 11:19:13.626980    1678 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 11:19:13.758131    1678 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 11:19:13.856301    1678 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 11:19:14.216188    1678 main.go:141] libmachine: Creating SSH key...
	I0924 11:19:14.308939    1678 main.go:141] libmachine: Creating Disk image...
	I0924 11:19:14.308947    1678 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 11:19:14.309205    1678 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/disk.qcow2
	I0924 11:19:14.368425    1678 main.go:141] libmachine: STDOUT: 
	I0924 11:19:14.368449    1678 main.go:141] libmachine: STDERR: 
	I0924 11:19:14.368523    1678 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/disk.qcow2 +20000M
	I0924 11:19:14.376513    1678 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 11:19:14.376535    1678 main.go:141] libmachine: STDERR: 
	I0924 11:19:14.376550    1678 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/disk.qcow2
	I0924 11:19:14.376553    1678 main.go:141] libmachine: Starting QEMU VM...
	I0924 11:19:14.376593    1678 qemu.go:418] Using hvf for hardware acceleration
	I0924 11:19:14.376630    1678 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:60:44:e8:fd:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/disk.qcow2
	I0924 11:19:14.510606    1678 main.go:141] libmachine: STDOUT: 
	I0924 11:19:14.510629    1678 main.go:141] libmachine: STDERR: 
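
The -qmp flag in the command above exposes a QMP control socket next to the disk image. Once the VM is up, its state can be queried over that socket; a sketch using nc, where the two JSON lines are standard QMP (a capabilities handshake, then a status query) and the socket path is the one from the log:

    printf '%s\n' '{"execute":"qmp_capabilities"}' '{"execute":"query-status"}' \
      | nc -U /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/monitor
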
	I0924 11:19:14.510633    1678 main.go:141] libmachine: Attempt 0
	I0924 11:19:14.510649    1678 main.go:141] libmachine: Searching for e:60:44:e8:fd:21 in /var/db/dhcpd_leases ...
	I0924 11:19:14.510725    1678 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0924 11:19:14.510747    1678 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66f44fca}
	I0924 11:19:16.512944    1678 main.go:141] libmachine: Attempt 1
	I0924 11:19:16.513084    1678 main.go:141] libmachine: Searching for e:60:44:e8:fd:21 in /var/db/dhcpd_leases ...
	I0924 11:19:16.513478    1678 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0924 11:19:16.513527    1678 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66f44fca}
	I0924 11:19:18.514929    1678 main.go:141] libmachine: Attempt 2
	I0924 11:19:18.515025    1678 main.go:141] libmachine: Searching for e:60:44:e8:fd:21 in /var/db/dhcpd_leases ...
	I0924 11:19:18.515462    1678 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0924 11:19:18.515522    1678 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66f44fca}
	I0924 11:19:20.517700    1678 main.go:141] libmachine: Attempt 3
	I0924 11:19:20.517756    1678 main.go:141] libmachine: Searching for e:60:44:e8:fd:21 in /var/db/dhcpd_leases ...
	I0924 11:19:20.517882    1678 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0924 11:19:20.517902    1678 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66f44fca}
	I0924 11:19:22.519924    1678 main.go:141] libmachine: Attempt 4
	I0924 11:19:22.519935    1678 main.go:141] libmachine: Searching for e:60:44:e8:fd:21 in /var/db/dhcpd_leases ...
	I0924 11:19:22.519965    1678 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0924 11:19:22.519971    1678 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66f44fca}
	I0924 11:19:24.521980    1678 main.go:141] libmachine: Attempt 5
	I0924 11:19:24.521992    1678 main.go:141] libmachine: Searching for e:60:44:e8:fd:21 in /var/db/dhcpd_leases ...
	I0924 11:19:24.522019    1678 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0924 11:19:24.522025    1678 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66f44fca}
	I0924 11:19:26.523298    1678 main.go:141] libmachine: Attempt 6
	I0924 11:19:26.523376    1678 main.go:141] libmachine: Searching for e:60:44:e8:fd:21 in /var/db/dhcpd_leases ...
	I0924 11:19:26.523455    1678 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0924 11:19:26.523463    1678 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66f44fca}
	I0924 11:19:28.525510    1678 main.go:141] libmachine: Attempt 7
	I0924 11:19:28.525532    1678 main.go:141] libmachine: Searching for e:60:44:e8:fd:21 in /var/db/dhcpd_leases ...
	I0924 11:19:28.525673    1678 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0924 11:19:28.525686    1678 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e:60:44:e8:fd:21 ID:1,e:60:44:e8:fd:21 Lease:0x66f4542f}
	I0924 11:19:28.525690    1678 main.go:141] libmachine: Found match: e:60:44:e8:fd:21
	I0924 11:19:28.525699    1678 main.go:141] libmachine: IP: 192.168.105.2
	I0924 11:19:28.525703    1678 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
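
Note the MAC being searched for, e:60:44:e8:fd:21, is the qemu NIC's 0e:60:44:e8:fd:21 with the leading zero stripped, matching how macOS writes hw_address entries into /var/db/dhcpd_leases. The polling loop above amounts to a lease lookup like this (context widths are illustrative):

    grep -B3 -A2 'e:60:44:e8:fd:21' /var/db/dhcpd_leases   # name, ip_address, hw_address, lease
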
	I0924 11:19:30.546085    1678 machine.go:93] provisionDockerMachine start ...
	I0924 11:19:30.547587    1678 main.go:141] libmachine: Using SSH client type: native
	I0924 11:19:30.548022    1678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102569c00] 0x10256c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0924 11:19:30.548039    1678 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 11:19:30.622887    1678 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 11:19:30.622913    1678 buildroot.go:166] provisioning hostname "addons-472000"
	I0924 11:19:30.623063    1678 main.go:141] libmachine: Using SSH client type: native
	I0924 11:19:30.623292    1678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102569c00] 0x10256c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0924 11:19:30.623303    1678 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-472000 && echo "addons-472000" | sudo tee /etc/hostname
	I0924 11:19:30.690166    1678 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-472000
	
	I0924 11:19:30.690268    1678 main.go:141] libmachine: Using SSH client type: native
	I0924 11:19:30.690445    1678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102569c00] 0x10256c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0924 11:19:30.690456    1678 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-472000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-472000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-472000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 11:19:30.745563    1678 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 11:19:30.745575    1678 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19700-1081/.minikube CaCertPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19700-1081/.minikube}
	I0924 11:19:30.745590    1678 buildroot.go:174] setting up certificates
	I0924 11:19:30.745595    1678 provision.go:84] configureAuth start
	I0924 11:19:30.745601    1678 provision.go:143] copyHostCerts
	I0924 11:19:30.745686    1678 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.pem (1078 bytes)
	I0924 11:19:30.745933    1678 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19700-1081/.minikube/cert.pem (1123 bytes)
	I0924 11:19:30.746054    1678 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19700-1081/.minikube/key.pem (1675 bytes)
	I0924 11:19:30.746155    1678 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca-key.pem org=jenkins.addons-472000 san=[127.0.0.1 192.168.105.2 addons-472000 localhost minikube]
	I0924 11:19:30.813639    1678 provision.go:177] copyRemoteCerts
	I0924 11:19:30.813699    1678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 11:19:30.813716    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:30.843200    1678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 11:19:30.851809    1678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 11:19:30.860248    1678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 11:19:30.868471    1678 provision.go:87] duration metric: took 122.856917ms to configureAuth
	I0924 11:19:30.868480    1678 buildroot.go:189] setting minikube options for container-runtime
	I0924 11:19:30.868575    1678 config.go:182] Loaded profile config "addons-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 11:19:30.868615    1678 main.go:141] libmachine: Using SSH client type: native
	I0924 11:19:30.868698    1678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102569c00] 0x10256c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0924 11:19:30.868703    1678 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0924 11:19:30.917558    1678 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0924 11:19:30.917566    1678 buildroot.go:70] root file system type: tmpfs
	I0924 11:19:30.917622    1678 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0924 11:19:30.917675    1678 main.go:141] libmachine: Using SSH client type: native
	I0924 11:19:30.917775    1678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102569c00] 0x10256c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0924 11:19:30.917807    1678 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0924 11:19:30.973902    1678 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0924 11:19:30.973961    1678 main.go:141] libmachine: Using SSH client type: native
	I0924 11:19:30.974062    1678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102569c00] 0x10256c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0924 11:19:30.974072    1678 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0924 11:19:32.333496    1678 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
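
The diff-or-replace one-liner above installs the new unit only when it differs from what is already on disk; here no docker.service existed yet, so the file was moved into place and the service enabled and restarted. A hedged follow-up check one could run on the guest:

    sudo systemctl is-enabled docker && sudo systemctl is-active docker
    sudo systemctl cat docker | head -n 5   # confirm the unit that was just written
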
	I0924 11:19:32.333546    1678 machine.go:96] duration metric: took 1.787454625s to provisionDockerMachine
	I0924 11:19:32.333553    1678 client.go:171] duration metric: took 18.706997209s to LocalClient.Create
	I0924 11:19:32.333564    1678 start.go:167] duration metric: took 18.707069209s to libmachine.API.Create "addons-472000"
	I0924 11:19:32.333569    1678 start.go:293] postStartSetup for "addons-472000" (driver="qemu2")
	I0924 11:19:32.333575    1678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 11:19:32.333653    1678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 11:19:32.333665    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:32.361950    1678 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 11:19:32.363836    1678 info.go:137] Remote host: Buildroot 2023.02.9
	I0924 11:19:32.363846    1678 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19700-1081/.minikube/addons for local assets ...
	I0924 11:19:32.363958    1678 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19700-1081/.minikube/files for local assets ...
	I0924 11:19:32.363990    1678 start.go:296] duration metric: took 30.418291ms for postStartSetup
	I0924 11:19:32.364401    1678 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/config.json ...
	I0924 11:19:32.364593    1678 start.go:128] duration metric: took 18.971977708s to createHost
	I0924 11:19:32.364622    1678 main.go:141] libmachine: Using SSH client type: native
	I0924 11:19:32.364710    1678 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102569c00] 0x10256c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0924 11:19:32.364717    1678 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 11:19:32.413291    1678 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727201971.949261044
	
	I0924 11:19:32.413300    1678 fix.go:216] guest clock: 1727201971.949261044
	I0924 11:19:32.413304    1678 fix.go:229] Guest: 2024-09-24 11:19:31.949261044 -0700 PDT Remote: 2024-09-24 11:19:32.364596 -0700 PDT m=+19.076665585 (delta=-415.334956ms)
	I0924 11:19:32.413320    1678 fix.go:200] guest clock delta is within tolerance: -415.334956ms
	I0924 11:19:32.413323    1678 start.go:83] releasing machines lock for "addons-472000", held for 19.020745166s
	I0924 11:19:32.413637    1678 ssh_runner.go:195] Run: cat /version.json
	I0924 11:19:32.413654    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:32.413639    1678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 11:19:32.413691    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:32.441508    1678 ssh_runner.go:195] Run: systemctl --version
	I0924 11:19:32.483682    1678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 11:19:32.485610    1678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 11:19:32.485644    1678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 11:19:32.491666    1678 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 11:19:32.491674    1678 start.go:495] detecting cgroup driver to use...
	I0924 11:19:32.491808    1678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 11:19:32.498499    1678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0924 11:19:32.502222    1678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0924 11:19:32.505955    1678 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0924 11:19:32.505990    1678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0924 11:19:32.509627    1678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 11:19:32.513054    1678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0924 11:19:32.516442    1678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 11:19:32.520164    1678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 11:19:32.524049    1678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0924 11:19:32.528007    1678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0924 11:19:32.531849    1678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0924 11:19:32.535720    1678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 11:19:32.539307    1678 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0924 11:19:32.539335    1678 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0924 11:19:32.543703    1678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
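The sysctl probe fails with status 255 only because br_netfilter is not loaded yet, which is why the modprobe follows; the echo then enables IPv4 forwarding for the running boot only. To make both survive a reboot, the conventional (not minikube-specific) recipe is a modules-load.d entry plus a sysctl.d fragment; the file names below are illustrative:

    sudo modprobe br_netfilter
    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system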
	I0924 11:19:32.547608    1678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 11:19:32.626363    1678 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0924 11:19:32.633394    1678 start.go:495] detecting cgroup driver to use...
	I0924 11:19:32.633451    1678 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0924 11:19:32.641522    1678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 11:19:32.647079    1678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 11:19:32.658172    1678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 11:19:32.663479    1678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0924 11:19:32.668935    1678 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0924 11:19:32.707315    1678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0924 11:19:32.713524    1678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 11:19:32.719526    1678 ssh_runner.go:195] Run: which cri-dockerd
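After the runtime switch, crictl.yaml points at cri-dockerd instead of containerd, so every later crictl call picks the right socket without needing a --runtime-endpoint flag. The file written just above is the one-key YAML from the printf; the image-endpoint line here is an optional extra, not part of what the log wrote:

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/cri-dockerd.sock
    image-endpoint: unix:///var/run/cri-dockerd.sock

    # crictl reads /etc/crictl.yaml by default:
    sudo crictl version
    sudo crictl ps -a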
	I0924 11:19:32.720886    1678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0924 11:19:32.724181    1678 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0924 11:19:32.730003    1678 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0924 11:19:32.797303    1678 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0924 11:19:32.867711    1678 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0924 11:19:32.867761    1678 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0924 11:19:32.874232    1678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 11:19:32.940595    1678 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0924 11:19:35.126021    1678 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.185438625s)
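The 130-byte /etc/docker/daemon.json copied a few lines up is not echoed into the log. A plausible shape for a file whose job is pinning the cgroupfs driver is sketched below; only the exec-opts key is implied by docker.go:574, the remaining keys are common companions and pure assumption:

    cat /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }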
	I0924 11:19:35.126096    1678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0924 11:19:35.131535    1678 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0924 11:19:35.138814    1678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0924 11:19:35.144160    1678 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0924 11:19:35.214725    1678 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0924 11:19:35.291183    1678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 11:19:35.356705    1678 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0924 11:19:35.363615    1678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0924 11:19:35.369254    1678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 11:19:35.437535    1678 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0924 11:19:35.462574    1678 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0924 11:19:35.462676    1678 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0924 11:19:35.464892    1678 start.go:563] Will wait 60s for crictl version
	I0924 11:19:35.464941    1678 ssh_runner.go:195] Run: which crictl
	I0924 11:19:35.466402    1678 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 11:19:35.484559    1678 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0924 11:19:35.484637    1678 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0924 11:19:35.495306    1678 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0924 11:19:35.518988    1678 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0924 11:19:35.519145    1678 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0924 11:19:35.520836    1678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
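The { grep -v ...; echo ...; } > tmp; sudo cp idiom is deliberately idempotent: any stale host.minikube.internal mapping is filtered out before the fresh one is appended, so repeated starts never accumulate duplicates, and writing through a temp file keeps /etc/hosts from being truncated mid-edit. The net effect is a single added line:

    192.168.105.1	host.minikube.internal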
	I0924 11:19:35.525010    1678 kubeadm.go:883] updating cluster {Name:addons-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-472000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 11:19:35.525062    1678 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 11:19:35.525122    1678 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0924 11:19:35.530403    1678 docker.go:685] Got preloaded images: 
	I0924 11:19:35.530411    1678 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0924 11:19:35.530455    1678 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0924 11:19:35.533988    1678 ssh_runner.go:195] Run: which lz4
	I0924 11:19:35.535566    1678 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 11:19:35.537161    1678 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 11:19:35.537170    1678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0924 11:19:36.781842    1678 docker.go:649] duration metric: took 1.246335042s to copy over tarball
	I0924 11:19:36.781910    1678 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
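The preload is an lz4-compressed tarball of the Docker image store, extracted under /var with xattrs preserved so file capabilities survive. To inspect such a tarball by hand before unpacking (assuming the lz4 tool is available), the standard tools suffice:

    # list the first few entries without unpacking
    lz4 -dc /preloaded.tar.lz4 | tar -t | head
    # or extract exactly as the runner does above
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4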
	I0924 11:19:37.738049    1678 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 11:19:37.753054    1678 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0924 11:19:37.756520    1678 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0924 11:19:37.762610    1678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 11:19:37.833885    1678 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0924 11:19:40.658854    1678 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.824988083s)
	I0924 11:19:40.658976    1678 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0924 11:19:40.665595    1678 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0924 11:19:40.665612    1678 cache_images.go:84] Images are preloaded, skipping loading
	I0924 11:19:40.665617    1678 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0924 11:19:40.665670    1678 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-472000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-472000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 11:19:40.665735    1678 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0924 11:19:40.686084    1678 cni.go:84] Creating CNI manager for ""
	I0924 11:19:40.686099    1678 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 11:19:40.686107    1678 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 11:19:40.686120    1678 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-472000 NodeName:addons-472000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 11:19:40.686206    1678 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-472000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 11:19:40.686283    1678 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 11:19:40.690214    1678 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 11:19:40.690249    1678 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 11:19:40.693475    1678 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0924 11:19:40.699411    1678 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 11:19:40.705368    1678 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
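The rendered YAML above is staged as kubeadm.yaml.new and only promoted to kubeadm.yaml after the config check further down. To vet a config like this by hand, recent kubeadm releases ship a validator, and init itself accepts --dry-run; both invocations are sketches, not taken from this log:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run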
	I0924 11:19:40.711555    1678 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0924 11:19:40.713155    1678 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 11:19:40.717233    1678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 11:19:40.806018    1678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 11:19:40.813295    1678 certs.go:68] Setting up /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000 for IP: 192.168.105.2
	I0924 11:19:40.813313    1678 certs.go:194] generating shared ca certs ...
	I0924 11:19:40.813323    1678 certs.go:226] acquiring lock for ca certs: {Name:mk724855f1a91a4bb17b52053043bbe8bd1cc119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:40.813509    1678 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.key
	I0924 11:19:40.985555    1678 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt ...
	I0924 11:19:40.985570    1678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt: {Name:mk8d221dd3baa5fb29fc32d828da8bd4f2922be4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:40.985929    1678 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.key ...
	I0924 11:19:40.985932    1678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.key: {Name:mk3fe103f7ff13d067be141de5363c80042311e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:40.986061    1678 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.key
	I0924 11:19:41.105653    1678 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.crt ...
	I0924 11:19:41.105657    1678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.crt: {Name:mk19947cd88edc1f70ecf2d99c45d034dcb1c4d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:41.105844    1678 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.key ...
	I0924 11:19:41.105847    1678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.key: {Name:mk90b7667329d4fdc145528170a92d5ba0ecf38c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:41.105982    1678 certs.go:256] generating profile certs ...
	I0924 11:19:41.106022    1678 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.key
	I0924 11:19:41.106028    1678 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt with IP's: []
	I0924 11:19:41.150875    1678 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt ...
	I0924 11:19:41.150879    1678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: {Name:mk09484929d0b55a6ee6e89cccd8fa645c8c96ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:41.151016    1678 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.key ...
	I0924 11:19:41.151020    1678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.key: {Name:mke53c4583810dc045112587a2a2366cc820a5c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:41.151134    1678 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/apiserver.key.3b1647e9
	I0924 11:19:41.151177    1678 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/apiserver.crt.3b1647e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0924 11:19:41.248927    1678 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/apiserver.crt.3b1647e9 ...
	I0924 11:19:41.248933    1678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/apiserver.crt.3b1647e9: {Name:mkbf075bdcd534570692d7c7983968d2bd2bf47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:41.249130    1678 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/apiserver.key.3b1647e9 ...
	I0924 11:19:41.249135    1678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/apiserver.key.3b1647e9: {Name:mkc38917a039be141d93d6cd548307c232319c8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:41.249277    1678 certs.go:381] copying /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/apiserver.crt.3b1647e9 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/apiserver.crt
	I0924 11:19:41.249511    1678 certs.go:385] copying /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/apiserver.key.3b1647e9 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/apiserver.key
	I0924 11:19:41.249650    1678 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/proxy-client.key
	I0924 11:19:41.249668    1678 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/proxy-client.crt with IP's: []
	I0924 11:19:41.329960    1678 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/proxy-client.crt ...
	I0924 11:19:41.329964    1678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/proxy-client.crt: {Name:mk12fa14d0011de8328f07c80b9c6e58c17c9428 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:41.330155    1678 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/proxy-client.key ...
	I0924 11:19:41.330159    1678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/proxy-client.key: {Name:mk72bb8c0962901869ead26c1584a75b3e5bdea1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
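certs.go and crypto.go generate all of the above in-process with Go's crypto/x509. Purely for illustration, an equivalent CA-plus-signed-client flow with openssl looks like this (key sizes, lifetimes, and subjects are arbitrary stand-ins, not what minikube uses):

    # the shared CA (the "minikubeCA" step)
    openssl genrsa -out ca.key 2048
    openssl req -x509 -new -key ca.key -days 365 -subj "/CN=minikubeCA" -out ca.crt
    # a client cert signed by that CA (the "minikube-user" step)
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/CN=minikube-user" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt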
	I0924 11:19:41.330472    1678 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 11:19:41.330501    1678 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem (1078 bytes)
	I0924 11:19:41.330525    1678 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem (1123 bytes)
	I0924 11:19:41.330546    1678 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/key.pem (1675 bytes)
	I0924 11:19:41.331018    1678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 11:19:41.340354    1678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 11:19:41.349284    1678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 11:19:41.357260    1678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 11:19:41.365464    1678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0924 11:19:41.373741    1678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 11:19:41.382096    1678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 11:19:41.390397    1678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 11:19:41.398964    1678 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 11:19:41.407422    1678 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 11:19:41.414542    1678 ssh_runner.go:195] Run: openssl version
	I0924 11:19:41.416835    1678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 11:19:41.420461    1678 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 11:19:41.422086    1678 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24  2024 /usr/share/ca-certificates/minikubeCA.pem
	I0924 11:19:41.422115    1678 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 11:19:41.424064    1678 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
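The odd-looking b5213941.0 target is OpenSSL's subject-hash naming convention: TLS libraries locate a CA in /etc/ssl/certs by hashing its subject. The two Run lines above amount to:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

with $hash evaluating to b5213941 for this CA.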
	I0924 11:19:41.427983    1678 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 11:19:41.429582    1678 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 11:19:41.429622    1678 kubeadm.go:392] StartCluster: {Name:addons-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-472000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 11:19:41.429703    1678 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0924 11:19:41.435146    1678 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 11:19:41.439214    1678 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 11:19:41.447464    1678 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 11:19:41.451465    1678 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 11:19:41.451473    1678 kubeadm.go:157] found existing configuration files:
	
	I0924 11:19:41.451514    1678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 11:19:41.454873    1678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 11:19:41.454917    1678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 11:19:41.458389    1678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 11:19:41.461480    1678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 11:19:41.461519    1678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 11:19:41.464721    1678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 11:19:41.468163    1678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 11:19:41.468209    1678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 11:19:41.471604    1678 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 11:19:41.475192    1678 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 11:19:41.475237    1678 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 11:19:41.479340    1678 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 11:19:41.499176    1678 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 11:19:41.499222    1678 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 11:19:41.539103    1678 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 11:19:41.539160    1678 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 11:19:41.539207    1678 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 11:19:41.543240    1678 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 11:19:41.559426    1678 out.go:235]   - Generating certificates and keys ...
	I0924 11:19:41.559457    1678 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 11:19:41.559486    1678 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 11:19:41.741395    1678 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 11:19:41.827085    1678 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 11:19:41.883579    1678 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 11:19:41.986990    1678 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 11:19:42.135833    1678 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 11:19:42.135892    1678 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-472000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0924 11:19:42.208808    1678 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 11:19:42.208870    1678 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-472000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0924 11:19:42.300158    1678 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 11:19:42.681286    1678 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 11:19:42.754667    1678 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 11:19:42.754702    1678 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 11:19:42.876679    1678 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 11:19:43.097289    1678 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 11:19:43.168117    1678 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 11:19:43.237414    1678 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 11:19:43.282877    1678 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 11:19:43.283076    1678 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 11:19:43.284250    1678 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 11:19:43.287437    1678 out.go:235]   - Booting up control plane ...
	I0924 11:19:43.287504    1678 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 11:19:43.287546    1678 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 11:19:43.287582    1678 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 11:19:43.291688    1678 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 11:19:43.294428    1678 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 11:19:43.294462    1678 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 11:19:43.370431    1678 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 11:19:43.370501    1678 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 11:19:43.880065    1678 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 510.595ms
	I0924 11:19:43.880267    1678 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 11:19:47.393100    1678 kubeadm.go:310] [api-check] The API server is healthy after 3.512768126s
	I0924 11:19:47.418404    1678 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 11:19:47.428701    1678 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 11:19:47.443773    1678 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 11:19:47.444002    1678 kubeadm.go:310] [mark-control-plane] Marking the node addons-472000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 11:19:47.449474    1678 kubeadm.go:310] [bootstrap-token] Using token: hht61v.78dva8178h60qhuj
	I0924 11:19:47.456120    1678 out.go:235]   - Configuring RBAC rules ...
	I0924 11:19:47.456219    1678 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 11:19:47.457492    1678 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 11:19:47.464013    1678 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 11:19:47.465355    1678 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 11:19:47.466721    1678 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 11:19:47.468293    1678 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 11:19:47.807287    1678 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 11:19:48.211551    1678 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 11:19:48.800543    1678 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 11:19:48.801719    1678 kubeadm.go:310] 
	I0924 11:19:48.801804    1678 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 11:19:48.801814    1678 kubeadm.go:310] 
	I0924 11:19:48.801903    1678 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 11:19:48.801910    1678 kubeadm.go:310] 
	I0924 11:19:48.801938    1678 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 11:19:48.802029    1678 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 11:19:48.802086    1678 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 11:19:48.802091    1678 kubeadm.go:310] 
	I0924 11:19:48.802150    1678 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 11:19:48.802155    1678 kubeadm.go:310] 
	I0924 11:19:48.802243    1678 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 11:19:48.802250    1678 kubeadm.go:310] 
	I0924 11:19:48.802306    1678 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 11:19:48.802380    1678 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 11:19:48.802465    1678 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 11:19:48.802473    1678 kubeadm.go:310] 
	I0924 11:19:48.802557    1678 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 11:19:48.802636    1678 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 11:19:48.802641    1678 kubeadm.go:310] 
	I0924 11:19:48.802728    1678 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hht61v.78dva8178h60qhuj \
	I0924 11:19:48.802863    1678 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4250e15ce19ea6ee8d936fb77d1a59ad22f9367fb00a8a9aa9e1b7fb7d1933b3 \
	I0924 11:19:48.802895    1678 kubeadm.go:310] 	--control-plane 
	I0924 11:19:48.802905    1678 kubeadm.go:310] 
	I0924 11:19:48.803060    1678 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 11:19:48.803068    1678 kubeadm.go:310] 
	I0924 11:19:48.803155    1678 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hht61v.78dva8178h60qhuj \
	I0924 11:19:48.803287    1678 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4250e15ce19ea6ee8d936fb77d1a59ad22f9367fb00a8a9aa9e1b7fb7d1933b3 
	I0924 11:19:48.803637    1678 kubeadm.go:310] W0924 18:19:41.033905    1600 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 11:19:48.804000    1678 kubeadm.go:310] W0924 18:19:41.034504    1600 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 11:19:48.804151    1678 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
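Both v1beta3 warnings name their own fix; run against the file minikube staged, the migration would be (output path illustrative):

    sudo kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-v1beta4.yaml

The kubelet warning is expected here: this flow starts kubelet directly (the systemctl start at 11:19:40.806) rather than enabling the stock unit.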
	I0924 11:19:48.804172    1678 cni.go:84] Creating CNI manager for ""
	I0924 11:19:48.804190    1678 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 11:19:48.807944    1678 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 11:19:48.814898    1678 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 11:19:48.822194    1678 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
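The 496-byte 1-k8s.conflist is not echoed into the log. A typical bridge conflist for the 10.244.0.0/16 pod CIDR chosen earlier has roughly this shape; treat it as illustrative, not minikube's byte-for-byte file:

    cat /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }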
	I0924 11:19:48.831900    1678 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 11:19:48.832050    1678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 11:19:48.832050    1678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-472000 minikube.k8s.io/updated_at=2024_09_24T11_19_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=addons-472000 minikube.k8s.io/primary=true
	I0924 11:19:48.839727    1678 ops.go:34] apiserver oom_adj: -16
	I0924 11:19:48.902911    1678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 11:19:49.405008    1678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 11:19:49.905033    1678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 11:19:50.405003    1678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 11:19:50.905016    1678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 11:19:51.404994    1678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 11:19:51.904983    1678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 11:19:52.404929    1678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 11:19:52.904922    1678 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 11:19:52.940764    1678 kubeadm.go:1113] duration metric: took 4.108846667s to wait for elevateKubeSystemPrivileges
	I0924 11:19:52.940777    1678 kubeadm.go:394] duration metric: took 11.511305583s to StartCluster
	I0924 11:19:52.940788    1678 settings.go:142] acquiring lock: {Name:mk8f5a1e4973fb47308ad8c9735bcc716ada1e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:52.941414    1678 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 11:19:52.941600    1678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/kubeconfig: {Name:mk406b8f0f5e016c0aa63af8364801bb91be8bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:52.942106    1678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 11:19:52.942106    1678 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 11:19:52.942134    1678 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0924 11:19:52.942175    1678 addons.go:69] Setting yakd=true in profile "addons-472000"
	I0924 11:19:52.942186    1678 addons.go:234] Setting addon yakd=true in "addons-472000"
	I0924 11:19:52.942199    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:19:52.942218    1678 config.go:182] Loaded profile config "addons-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 11:19:52.942219    1678 addons.go:69] Setting inspektor-gadget=true in profile "addons-472000"
	I0924 11:19:52.942226    1678 addons.go:234] Setting addon inspektor-gadget=true in "addons-472000"
	I0924 11:19:52.942244    1678 addons.go:69] Setting default-storageclass=true in profile "addons-472000"
	I0924 11:19:52.942246    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:19:52.942251    1678 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-472000"
	I0924 11:19:52.942294    1678 addons.go:69] Setting volcano=true in profile "addons-472000"
	I0924 11:19:52.942321    1678 addons.go:69] Setting metrics-server=true in profile "addons-472000"
	I0924 11:19:52.942329    1678 addons.go:234] Setting addon volcano=true in "addons-472000"
	I0924 11:19:52.942334    1678 addons.go:234] Setting addon metrics-server=true in "addons-472000"
	I0924 11:19:52.942339    1678 addons.go:69] Setting registry=true in profile "addons-472000"
	I0924 11:19:52.942352    1678 addons.go:234] Setting addon registry=true in "addons-472000"
	I0924 11:19:52.942362    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:19:52.942371    1678 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-472000"
	I0924 11:19:52.942376    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:19:52.942383    1678 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-472000"
	I0924 11:19:52.942415    1678 addons.go:69] Setting cloud-spanner=true in profile "addons-472000"
	I0924 11:19:52.942420    1678 addons.go:234] Setting addon cloud-spanner=true in "addons-472000"
	I0924 11:19:52.942428    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:19:52.942499    1678 retry.go:31] will retry after 1.195333517s: connect: dial unix /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/monitor: connect: connection refused
	I0924 11:19:52.942508    1678 addons.go:69] Setting volumesnapshots=true in profile "addons-472000"
	I0924 11:19:52.942511    1678 addons.go:234] Setting addon volumesnapshots=true in "addons-472000"
	I0924 11:19:52.942518    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:19:52.942529    1678 retry.go:31] will retry after 963.414347ms: connect: dial unix /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/monitor: connect: connection refused
	I0924 11:19:52.942638    1678 addons.go:69] Setting gcp-auth=true in profile "addons-472000"
	I0924 11:19:52.942644    1678 mustload.go:65] Loading cluster: addons-472000
	I0924 11:19:52.942714    1678 config.go:182] Loaded profile config "addons-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 11:19:52.942816    1678 retry.go:31] will retry after 885.979931ms: connect: dial unix /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/monitor: connect: connection refused
	I0924 11:19:52.942363    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:19:52.942832    1678 retry.go:31] will retry after 896.470372ms: connect: dial unix /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/monitor: connect: connection refused
	I0924 11:19:52.942829    1678 retry.go:31] will retry after 595.244498ms: connect: dial unix /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/monitor: connect: connection refused
	I0924 11:19:52.942839    1678 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-472000"
	I0924 11:19:52.942838    1678 addons.go:69] Setting ingress=true in profile "addons-472000"
	I0924 11:19:52.942851    1678 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-472000"
	I0924 11:19:52.942859    1678 addons.go:234] Setting addon ingress=true in "addons-472000"
	I0924 11:19:52.942887    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:19:52.942294    1678 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-472000"
	I0924 11:19:52.942923    1678 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-472000"
	I0924 11:19:52.942922    1678 retry.go:31] will retry after 1.055361192s: connect: dial unix /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/monitor: connect: connection refused
	I0924 11:19:52.942925    1678 retry.go:31] will retry after 1.153040661s: connect: dial unix /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/monitor: connect: connection refused
	I0924 11:19:52.942929    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:19:52.942323    1678 addons.go:69] Setting storage-provisioner=true in profile "addons-472000"
	I0924 11:19:52.943026    1678 addons.go:234] Setting addon storage-provisioner=true in "addons-472000"
	I0924 11:19:52.943035    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:19:52.942841    1678 addons.go:69] Setting ingress-dns=true in profile "addons-472000"
	I0924 11:19:52.943055    1678 addons.go:234] Setting addon ingress-dns=true in "addons-472000"
	I0924 11:19:52.943080    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:19:52.942861    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:19:52.943173    1678 retry.go:31] will retry after 935.420775ms: connect: dial unix /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/monitor: connect: connection refused
	I0924 11:19:52.943137    1678 retry.go:31] will retry after 1.311357212s: connect: dial unix /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/monitor: connect: connection refused
	I0924 11:19:52.943242    1678 retry.go:31] will retry after 1.3817863s: connect: dial unix /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/monitor: connect: connection refused
	I0924 11:19:52.943242    1678 retry.go:31] will retry after 973.475336ms: connect: dial unix /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/monitor: connect: connection refused
	I0924 11:19:52.943341    1678 retry.go:31] will retry after 967.064821ms: connect: dial unix /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/monitor: connect: connection refused
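Each addon goroutine that fails to dial the QEMU monitor socket schedules its own independent retry with a slightly different delay, which is why the block above is a burst of near-identical retry.go lines rather than one. Reduced to shell, the pattern is plain retry-with-backoff; dial_monitor is a hypothetical stand-in for the unix-socket dial, and the delays are illustrative:

    dial_monitor() { false; }          # stand-in: the real code dials .../addons-472000/monitor
    for delay in 0.6 0.9 1.0 1.2 1.4; do
        dial_monitor && break
        echo "will retry after ${delay}s: connect: connection refused"
        sleep "$delay"
    done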
	I0924 11:19:52.944638    1678 addons.go:234] Setting addon default-storageclass=true in "addons-472000"
	I0924 11:19:52.947041    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:19:52.946810    1678 out.go:177] * Verifying Kubernetes components...
	I0924 11:19:52.947601    1678 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 11:19:52.950979    1678 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 11:19:52.951003    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:52.954636    1678 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0924 11:19:52.954637    1678 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0924 11:19:52.958818    1678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 11:19:52.962810    1678 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0924 11:19:52.962817    1678 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0924 11:19:52.962825    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:52.970767    1678 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0924 11:19:52.974831    1678 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0924 11:19:52.978771    1678 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0924 11:19:52.982779    1678 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0924 11:19:52.986605    1678 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0924 11:19:52.990771    1678 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0924 11:19:52.991952    1678 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
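That pipeline rewrites the live coredns ConfigMap in flight: sed inserts a hosts block just ahead of the forward directive and a log directive after errors, then kubectl replace pushes the result back. On a stock Corefile the relevant fragment comes out as (other plugins unchanged and abridged):

    .:53 {
        errors
        log
        hosts {
           192.168.105.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        # ... remaining stock plugins ...
    }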
	I0924 11:19:52.998690    1678 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0924 11:19:53.002785    1678 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0924 11:19:53.002794    1678 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0924 11:19:53.002806    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:53.063152    1678 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 11:19:53.073381    1678 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0924 11:19:53.073397    1678 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0924 11:19:53.073457    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 11:19:53.109596    1678 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0924 11:19:53.109610    1678 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0924 11:19:53.122434    1678 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0924 11:19:53.122444    1678 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0924 11:19:53.128723    1678 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0924 11:19:53.128739    1678 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0924 11:19:53.135134    1678 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0924 11:19:53.135151    1678 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0924 11:19:53.140572    1678 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0924 11:19:53.140579    1678 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0924 11:19:53.148220    1678 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0924 11:19:53.148233    1678 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0924 11:19:53.156123    1678 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 11:19:53.156133    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0924 11:19:53.173153    1678 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0924 11:19:53.173164    1678 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0924 11:19:53.183139    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 11:19:53.200744    1678 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0924 11:19:53.200755    1678 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0924 11:19:53.222277    1678 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0924 11:19:53.222741    1678 node_ready.go:35] waiting up to 6m0s for node "addons-472000" to be "Ready" ...
	I0924 11:19:53.229641    1678 node_ready.go:49] node "addons-472000" has status "Ready":"True"
	I0924 11:19:53.229659    1678 node_ready.go:38] duration metric: took 6.899417ms for node "addons-472000" to be "Ready" ...
	I0924 11:19:53.229665    1678 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
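
node_ready.go and pod_ready.go here are polling loops against the API server. A condensed client-go sketch of the node half (hypothetical, not minikube's implementation):

	// Sketch: poll a Node until its Ready condition reports True,
	// with the same 6m0s budget the log uses.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			n, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-472000", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println(`node "addons-472000" is Ready`)
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println(`timed out waiting for node "addons-472000"`)
	}

The pod half is the same loop, checking each system-critical pod's Ready condition instead.
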
	I0924 11:19:53.230564    1678 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0924 11:19:53.230572    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0924 11:19:53.236511    1678 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-472000" in "kube-system" namespace to be "Ready" ...
	I0924 11:19:53.253051    1678 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0924 11:19:53.253062    1678 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0924 11:19:53.259729    1678 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0924 11:19:53.259744    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0924 11:19:53.267766    1678 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0924 11:19:53.267777    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0924 11:19:53.274152    1678 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 11:19:53.274164    1678 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0924 11:19:53.280821    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 11:19:53.541036    1678 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-472000"
	I0924 11:19:53.541057    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:19:53.545006    1678 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0924 11:19:53.552914    1678 out.go:177]   - Using image docker.io/busybox:stable
	I0924 11:19:53.557001    1678 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 11:19:53.557012    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0924 11:19:53.557023    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:53.629089    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 11:19:53.728942    1678 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-472000" context rescaled to 1 replicas
	I0924 11:19:53.836360    1678 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0924 11:19:53.840535    1678 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0924 11:19:53.847451    1678 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0924 11:19:53.848072    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:19:53.850906    1678 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0924 11:19:53.850918    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0924 11:19:53.850927    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:53.890307    1678 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 11:19:53.900443    1678 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 11:19:53.903467    1678 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0924 11:19:53.907386    1678 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0924 11:19:53.907395    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0924 11:19:53.907403    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:53.910457    1678 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0924 11:19:53.914470    1678 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0924 11:19:53.914478    1678 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0924 11:19:53.914481    1678 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0924 11:19:53.914604    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:53.918579    1678 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0924 11:19:53.918590    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0924 11:19:53.918601    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:53.921462    1678 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0924 11:19:53.924436    1678 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 11:19:53.924448    1678 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 11:19:53.924458    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:53.997907    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0924 11:19:54.001495    1678 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0924 11:19:54.004473    1678 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0924 11:19:54.004482    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0924 11:19:54.004492    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	W0924 11:19:54.007152    1678 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0924 11:19:54.007171    1678 retry.go:31] will retry after 293.622727ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
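
The failure above is a CRD-ordering race rather than a broken manifest: the csi-hostpath-snapclass VolumeSnapshotClass is applied in the same invocation that creates its CRD, and the API server has not yet established the new kind, hence "no matches for kind ... ensure CRDs are installed first". minikube's answer is the 293ms retry scheduled here, re-run as apply --force at 11:19:54.301 below and completing at 11:19:57.752. An alternative, sketched here with the apiextensions client, is to wait for the CRD's Established condition before applying any custom resources; this is a hypothetical approach, not what minikube does:

	// Sketch: block until a CRD reports Established before applying
	// custom resources of that kind.
	package main

	import (
		"context"
		"fmt"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := apiextclient.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		name := "volumesnapshotclasses.snapshot.storage.k8s.io"
		deadline := time.Now().Add(time.Minute)
		for time.Now().Before(deadline) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(context.Background(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range crd.Status.Conditions {
					if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
						fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for CRD to be established")
	}
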
	I0924 11:19:54.026767    1678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 11:19:54.026778    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0924 11:19:54.060088    1678 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0924 11:19:54.060099    1678 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0924 11:19:54.088679    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0924 11:19:54.100535    1678 out.go:177]   - Using image docker.io/registry:2.8.3
	I0924 11:19:54.108466    1678 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0924 11:19:54.112395    1678 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0924 11:19:54.112405    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0924 11:19:54.112416    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:54.129666    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0924 11:19:54.135054    1678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 11:19:54.135068    1678 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 11:19:54.142521    1678 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0924 11:19:54.145464    1678 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0924 11:19:54.145476    1678 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0924 11:19:54.145488    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:54.145780    1678 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0924 11:19:54.145785    1678 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0924 11:19:54.245744    1678 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 11:19:54.245756    1678 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 11:19:54.260527    1678 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0924 11:19:54.261746    1678 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 11:19:54.261752    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0924 11:19:54.261763    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:54.266905    1678 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0924 11:19:54.266917    1678 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0924 11:19:54.272773    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0924 11:19:54.301455    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 11:19:54.311638    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 11:19:54.329511    1678 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 11:19:54.330873    1678 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 11:19:54.330881    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 11:19:54.330891    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:19:54.331135    1678 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0924 11:19:54.331141    1678 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0924 11:19:54.342473    1678 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0924 11:19:54.342486    1678 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0924 11:19:54.385890    1678 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0924 11:19:54.385906    1678 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0924 11:19:54.424982    1678 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0924 11:19:54.424995    1678 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0924 11:19:54.431616    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 11:19:54.456973    1678 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0924 11:19:54.456983    1678 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0924 11:19:54.568842    1678 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0924 11:19:54.568852    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0924 11:19:54.604322    1678 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0924 11:19:54.604340    1678 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0924 11:19:54.645535    1678 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 11:19:54.645545    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0924 11:19:54.646986    1678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.366162666s)
	I0924 11:19:54.647017    1678 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-472000"
	I0924 11:19:54.647018    1678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.017929s)
	I0924 11:19:54.653411    1678 out.go:177] * Verifying csi-hostpath-driver addon...
	I0924 11:19:54.659964    1678 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0924 11:19:54.681071    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0924 11:19:54.706254    1678 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0924 11:19:54.706266    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
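
The kapi.go:96 lines that dominate the remainder of this log are one polling loop per addon: list the pods matching a label selector in a namespace and report their phase until all of them run. A sketch of such a loop (hypothetical helper, simplified from what kapi.go actually logs):

	// Sketch: wait for every pod matching a label selector to be Running.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allRunning := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						allRunning = false
						break
					}
				}
				if allRunning {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForLabel(cs, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
			panic(err)
		}
	}
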
	I0924 11:19:54.722336    1678 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0924 11:19:54.722347    1678 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0924 11:19:54.750203    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 11:19:54.752357    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 11:19:54.918698    1678 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0924 11:19:54.918709    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0924 11:19:55.165762    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:19:55.224982    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0924 11:19:55.241481    1678 pod_ready.go:103] pod "etcd-addons-472000" in "kube-system" namespace has status "Ready":"False"
	I0924 11:19:55.675472    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:19:56.172691    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:19:56.663328    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:19:56.740050    1678 pod_ready.go:93] pod "etcd-addons-472000" in "kube-system" namespace has status "Ready":"True"
	I0924 11:19:56.740059    1678 pod_ready.go:82] duration metric: took 3.503582167s for pod "etcd-addons-472000" in "kube-system" namespace to be "Ready" ...
	I0924 11:19:56.740063    1678 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-472000" in "kube-system" namespace to be "Ready" ...
	I0924 11:19:57.168348    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:19:57.672262    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:19:57.752148    1678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.754273125s)
	I0924 11:19:57.752171    1678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.663528417s)
	I0924 11:19:57.752217    1678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.622587958s)
	I0924 11:19:57.752224    1678 addons.go:475] Verifying addon ingress=true in "addons-472000"
	I0924 11:19:57.752253    1678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.479515042s)
	I0924 11:19:57.752289    1678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.45086725s)
	I0924 11:19:57.752360    1678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.440755542s)
	I0924 11:19:57.752367    1678 addons.go:475] Verifying addon metrics-server=true in "addons-472000"
	I0924 11:19:57.752387    1678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.320803709s)
	I0924 11:19:57.752402    1678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.071360917s)
	I0924 11:19:57.752409    1678 addons.go:475] Verifying addon registry=true in "addons-472000"
	I0924 11:19:57.752483    1678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.002308292s)
	I0924 11:19:57.752506    1678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.000178458s)
	I0924 11:19:57.752520    1678 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.527555917s)
	I0924 11:19:57.757308    1678 out.go:177] * Verifying ingress addon...
	I0924 11:19:57.764355    1678 out.go:177] * Verifying registry addon...
	I0924 11:19:57.774321    1678 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-472000 service yakd-dashboard -n yakd-dashboard
	
	I0924 11:19:57.777845    1678 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0924 11:19:57.780765    1678 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0924 11:19:57.782580    1678 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0924 11:19:57.782589    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:19:57.887128    1678 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0924 11:19:57.887138    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:19:58.164521    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:19:58.282313    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:19:58.282672    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:19:58.664174    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:19:58.745312    1678 pod_ready.go:103] pod "kube-apiserver-addons-472000" in "kube-system" namespace has status "Ready":"False"
	I0924 11:19:58.782033    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:19:58.782826    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:19:59.164093    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:19:59.280442    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:19:59.282193    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:19:59.663849    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:19:59.781766    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:19:59.782344    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:00.164816    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:00.282013    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:00.282935    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:00.663839    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:00.745312    1678 pod_ready.go:103] pod "kube-apiserver-addons-472000" in "kube-system" namespace has status "Ready":"False"
	I0924 11:20:00.779959    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:00.782473    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:01.054023    1678 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0924 11:20:01.054040    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:20:01.087043    1678 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0924 11:20:01.095571    1678 addons.go:234] Setting addon gcp-auth=true in "addons-472000"
	I0924 11:20:01.095592    1678 host.go:66] Checking if "addons-472000" exists ...
	I0924 11:20:01.096333    1678 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0924 11:20:01.096342    1678 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/addons-472000/id_rsa Username:docker}
	I0924 11:20:01.129787    1678 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 11:20:01.136768    1678 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0924 11:20:01.141820    1678 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0924 11:20:01.141825    1678 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0924 11:20:01.148837    1678 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0924 11:20:01.148846    1678 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0924 11:20:01.155337    1678 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 11:20:01.155344    1678 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0924 11:20:01.163282    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:01.165096    1678 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 11:20:01.282399    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:01.282591    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:01.388764    1678 addons.go:475] Verifying addon gcp-auth=true in "addons-472000"
	I0924 11:20:01.392924    1678 out.go:177] * Verifying gcp-auth addon...
	I0924 11:20:01.401130    1678 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0924 11:20:01.402337    1678 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0924 11:20:01.663680    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:01.780848    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:01.782104    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:02.164048    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:02.246811    1678 pod_ready.go:93] pod "kube-apiserver-addons-472000" in "kube-system" namespace has status "Ready":"True"
	I0924 11:20:02.246822    1678 pod_ready.go:82] duration metric: took 5.506825709s for pod "kube-apiserver-addons-472000" in "kube-system" namespace to be "Ready" ...
	I0924 11:20:02.246827    1678 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-472000" in "kube-system" namespace to be "Ready" ...
	I0924 11:20:02.282087    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:02.282862    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:02.665707    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:02.751516    1678 pod_ready.go:93] pod "kube-controller-manager-addons-472000" in "kube-system" namespace has status "Ready":"True"
	I0924 11:20:02.751527    1678 pod_ready.go:82] duration metric: took 504.702459ms for pod "kube-controller-manager-addons-472000" in "kube-system" namespace to be "Ready" ...
	I0924 11:20:02.751532    1678 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qhbt7" in "kube-system" namespace to be "Ready" ...
	I0924 11:20:02.753764    1678 pod_ready.go:93] pod "kube-proxy-qhbt7" in "kube-system" namespace has status "Ready":"True"
	I0924 11:20:02.753769    1678 pod_ready.go:82] duration metric: took 2.234833ms for pod "kube-proxy-qhbt7" in "kube-system" namespace to be "Ready" ...
	I0924 11:20:02.753773    1678 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-472000" in "kube-system" namespace to be "Ready" ...
	I0924 11:20:02.755715    1678 pod_ready.go:93] pod "kube-scheduler-addons-472000" in "kube-system" namespace has status "Ready":"True"
	I0924 11:20:02.755722    1678 pod_ready.go:82] duration metric: took 1.945667ms for pod "kube-scheduler-addons-472000" in "kube-system" namespace to be "Ready" ...
	I0924 11:20:02.755725    1678 pod_ready.go:39] duration metric: took 9.526176542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 11:20:02.755737    1678 api_server.go:52] waiting for apiserver process to appear ...
	I0924 11:20:02.755800    1678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 11:20:02.764890    1678 api_server.go:72] duration metric: took 9.822897959s to wait for apiserver process to appear ...
	I0924 11:20:02.764899    1678 api_server.go:88] waiting for apiserver healthz status ...
	I0924 11:20:02.764908    1678 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0924 11:20:02.767569    1678 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0924 11:20:02.768091    1678 api_server.go:141] control plane version: v1.31.1
	I0924 11:20:02.768099    1678 api_server.go:131] duration metric: took 3.197208ms to wait for apiserver health ...
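
The healthz probe at 11:20:02.764 is a plain HTTPS GET; on a default cluster /healthz is readable even without credentials because the system:public-info-viewer role is bound to unauthenticated users. A short sketch (TLS verification is skipped only to keep it brief; real code should trust the cluster CA):

	// Sketch: probe the apiserver's /healthz endpoint over HTTPS.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only; load the cluster CA instead in real code.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.105.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("https://192.168.105.2:8443/healthz returned %d:\n%s\n", resp.StatusCode, body)
	}
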
	I0924 11:20:02.768102    1678 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 11:20:02.772783    1678 system_pods.go:59] 17 kube-system pods found
	I0924 11:20:02.772798    1678 system_pods.go:61] "coredns-7c65d6cfc9-rdmmc" [b2c5b255-aa7e-4835-aef6-74ac11bf66cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 11:20:02.772802    1678 system_pods.go:61] "csi-hostpath-attacher-0" [24d7d711-b2f2-42ea-8248-36a10eddba5d] Running
	I0924 11:20:02.772806    1678 system_pods.go:61] "csi-hostpath-resizer-0" [274a97a6-94b7-463c-9f7f-9483941b0297] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 11:20:02.772810    1678 system_pods.go:61] "csi-hostpathplugin-dhzfg" [2066982d-a30a-4f7f-8d29-1350ed8acff9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 11:20:02.772813    1678 system_pods.go:61] "etcd-addons-472000" [23c23d58-5a57-4364-a735-b955242d0c85] Running
	I0924 11:20:02.772817    1678 system_pods.go:61] "kube-apiserver-addons-472000" [62f5c5a3-1c52-4c80-81b8-2b4b0b722361] Running
	I0924 11:20:02.772820    1678 system_pods.go:61] "kube-controller-manager-addons-472000" [da956bd3-a395-4b91-8479-696138e2f9aa] Running
	I0924 11:20:02.772822    1678 system_pods.go:61] "kube-ingress-dns-minikube" [5309b893-b499-4431-9169-983f584ccd7a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0924 11:20:02.772825    1678 system_pods.go:61] "kube-proxy-qhbt7" [62376be1-df2a-443d-9d8b-b43165ac8009] Running
	I0924 11:20:02.772827    1678 system_pods.go:61] "kube-scheduler-addons-472000" [6a3b9458-e977-4569-8732-cdc08fbe690b] Running
	I0924 11:20:02.772830    1678 system_pods.go:61] "metrics-server-84c5f94fbc-x8kns" [6d3ad84c-14ea-4b11-9020-775ab4a507de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 11:20:02.772833    1678 system_pods.go:61] "nvidia-device-plugin-daemonset-8mc94" [4abf47e9-f66b-4179-821a-ab27378421bd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0924 11:20:02.772838    1678 system_pods.go:61] "registry-66c9cd494c-dr9lr" [56205fad-453c-44bc-b682-3000f315999a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0924 11:20:02.772842    1678 system_pods.go:61] "registry-proxy-jxwjp" [c4fce1ef-d25b-40aa-add1-32efc614c74c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0924 11:20:02.772844    1678 system_pods.go:61] "snapshot-controller-56fcc65765-f2zjb" [9023def2-6f40-49f1-9d54-ee0a2992feb8] Running
	I0924 11:20:02.772847    1678 system_pods.go:61] "snapshot-controller-56fcc65765-l9rxm" [207c6283-a250-4692-ba6e-420fca1ddab2] Running
	I0924 11:20:02.772849    1678 system_pods.go:61] "storage-provisioner" [00e9621b-be7c-4260-8c3b-66d44adaf9d4] Running
	I0924 11:20:02.772852    1678 system_pods.go:74] duration metric: took 4.747209ms to wait for pod list to return data ...
	I0924 11:20:02.772856    1678 default_sa.go:34] waiting for default service account to be created ...
	I0924 11:20:02.774483    1678 default_sa.go:45] found service account: "default"
	I0924 11:20:02.774488    1678 default_sa.go:55] duration metric: took 1.630166ms for default service account to be created ...
	I0924 11:20:02.774491    1678 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 11:20:02.781663    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:02.782411    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:02.848591    1678 system_pods.go:86] 17 kube-system pods found
	I0924 11:20:02.848602    1678 system_pods.go:89] "coredns-7c65d6cfc9-rdmmc" [b2c5b255-aa7e-4835-aef6-74ac11bf66cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0924 11:20:02.848622    1678 system_pods.go:89] "csi-hostpath-attacher-0" [24d7d711-b2f2-42ea-8248-36a10eddba5d] Running
	I0924 11:20:02.848626    1678 system_pods.go:89] "csi-hostpath-resizer-0" [274a97a6-94b7-463c-9f7f-9483941b0297] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 11:20:02.848629    1678 system_pods.go:89] "csi-hostpathplugin-dhzfg" [2066982d-a30a-4f7f-8d29-1350ed8acff9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 11:20:02.848631    1678 system_pods.go:89] "etcd-addons-472000" [23c23d58-5a57-4364-a735-b955242d0c85] Running
	I0924 11:20:02.848633    1678 system_pods.go:89] "kube-apiserver-addons-472000" [62f5c5a3-1c52-4c80-81b8-2b4b0b722361] Running
	I0924 11:20:02.848635    1678 system_pods.go:89] "kube-controller-manager-addons-472000" [da956bd3-a395-4b91-8479-696138e2f9aa] Running
	I0924 11:20:02.848638    1678 system_pods.go:89] "kube-ingress-dns-minikube" [5309b893-b499-4431-9169-983f584ccd7a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0924 11:20:02.848642    1678 system_pods.go:89] "kube-proxy-qhbt7" [62376be1-df2a-443d-9d8b-b43165ac8009] Running
	I0924 11:20:02.848643    1678 system_pods.go:89] "kube-scheduler-addons-472000" [6a3b9458-e977-4569-8732-cdc08fbe690b] Running
	I0924 11:20:02.848646    1678 system_pods.go:89] "metrics-server-84c5f94fbc-x8kns" [6d3ad84c-14ea-4b11-9020-775ab4a507de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 11:20:02.848649    1678 system_pods.go:89] "nvidia-device-plugin-daemonset-8mc94" [4abf47e9-f66b-4179-821a-ab27378421bd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0924 11:20:02.848653    1678 system_pods.go:89] "registry-66c9cd494c-dr9lr" [56205fad-453c-44bc-b682-3000f315999a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0924 11:20:02.848656    1678 system_pods.go:89] "registry-proxy-jxwjp" [c4fce1ef-d25b-40aa-add1-32efc614c74c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0924 11:20:02.848658    1678 system_pods.go:89] "snapshot-controller-56fcc65765-f2zjb" [9023def2-6f40-49f1-9d54-ee0a2992feb8] Running
	I0924 11:20:02.848660    1678 system_pods.go:89] "snapshot-controller-56fcc65765-l9rxm" [207c6283-a250-4692-ba6e-420fca1ddab2] Running
	I0924 11:20:02.848661    1678 system_pods.go:89] "storage-provisioner" [00e9621b-be7c-4260-8c3b-66d44adaf9d4] Running
	I0924 11:20:02.848665    1678 system_pods.go:126] duration metric: took 74.172042ms to wait for k8s-apps to be running ...
	I0924 11:20:02.848668    1678 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 11:20:02.848722    1678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 11:20:02.855563    1678 system_svc.go:56] duration metric: took 6.890041ms WaitForService to wait for kubelet
	I0924 11:20:02.855574    1678 kubeadm.go:582] duration metric: took 9.913584917s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 11:20:02.855583    1678 node_conditions.go:102] verifying NodePressure condition ...
	I0924 11:20:03.044250    1678 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0924 11:20:03.044260    1678 node_conditions.go:123] node cpu capacity is 2
	I0924 11:20:03.044266    1678 node_conditions.go:105] duration metric: took 188.683167ms to run NodePressure ...
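
The NodePressure verification reads the node's advertised capacity (17734596Ki of ephemeral storage and 2 CPUs here) and its pressure conditions. Roughly, in client-go terms (a sketch, not minikube's node_conditions.go):

	// Sketch: dump a node's capacity and memory/disk pressure conditions.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-472000", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}

		fmt.Println("ephemeral-storage capacity:", node.Status.Capacity.StorageEphemeral().String())
		fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("%s=%s\n", c.Type, c.Status)
			}
		}
	}
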
	I0924 11:20:03.044272    1678 start.go:241] waiting for startup goroutines ...
	I0924 11:20:03.163918    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:03.281824    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:03.282630    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:03.664353    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:03.782080    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:03.782527    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:04.165383    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:04.282313    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:04.283025    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:04.664535    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:04.783494    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:04.783569    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:05.164004    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:05.281706    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:05.282584    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:05.664416    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:05.780102    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:05.781889    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:06.164201    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:06.281874    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:06.282343    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:06.663876    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:06.781495    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:06.782344    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:07.164299    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:07.281755    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:07.282327    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:07.663956    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:07.782137    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:07.782275    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:08.164600    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:08.282256    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:08.283310    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:08.664527    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:08.781924    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:08.782655    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:09.164214    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:09.281847    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:09.282531    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:09.664252    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:09.782132    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:09.782729    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:10.248943    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:10.281774    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:10.282073    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:10.664438    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:10.781752    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:10.782414    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:11.163798    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:11.281589    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:11.282253    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:11.664205    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:11.782198    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:11.782775    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:12.180143    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:12.281806    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:12.282502    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:12.665410    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:12.782711    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:12.783303    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:13.164329    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:13.281719    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:13.282224    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:13.664359    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:13.782529    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:13.782639    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:14.164133    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:14.281764    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:14.282357    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:14.664605    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:14.783007    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:14.783436    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:15.164272    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:15.281357    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:15.281923    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:15.664458    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:15.781735    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:15.782034    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:16.164008    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:16.281872    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:16.282301    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:16.875573    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:16.875754    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:16.875920    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:17.164180    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:17.285785    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:17.286199    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:17.666220    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:17.782199    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:17.782939    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:18.164171    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:18.282289    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:18.283125    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:18.664017    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:18.782789    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:18.782910    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:19.164978    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:19.281780    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:19.282235    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:19.664084    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:19.779836    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:19.781833    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:20.163847    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:20.349873    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:20.349988    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:20.679230    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:20.782129    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:20.782387    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:21.163626    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:21.283670    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:21.284310    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:21.664267    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:21.781961    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:21.782460    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:22.163894    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:22.281697    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:22.281860    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:22.664232    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:22.781650    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:22.781995    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:23.161753    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:23.280056    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:23.281897    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:23.664435    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:23.783269    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:23.783432    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:24.164078    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:24.282018    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:24.282579    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:24.664762    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:24.781961    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:24.782107    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:25.163678    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:25.281655    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:25.281894    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:25.663268    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:25.923994    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:25.924059    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:26.162172    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:26.281480    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:26.282114    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:26.663972    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:26.781462    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:26.781977    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:27.288625    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:27.288777    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:27.288952    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:27.663906    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:27.780494    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:27.782074    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:28.162608    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:28.281969    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:28.282266    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:28.664081    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:28.781950    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:28.782719    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:29.164731    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:29.281806    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:29.282275    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:29.664013    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:29.781530    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:29.781867    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:30.164049    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:30.281695    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:30.281945    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:30.664073    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:30.781620    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:30.782043    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:31.163914    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:31.281752    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:31.282304    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:31.663985    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:31.781575    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:31.781823    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:32.163999    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:32.281643    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:32.282057    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:32.664483    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:32.782625    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:32.786609    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:33.168325    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:33.283419    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:33.283537    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:33.669339    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:33.782924    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:33.787990    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:34.164499    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:34.281293    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:34.282093    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:34.665055    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:34.784296    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:34.785456    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:35.164206    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:35.286415    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:35.287370    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:35.663730    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:35.781598    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:35.782084    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:36.163918    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:36.281561    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:36.281823    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:36.663735    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:36.781433    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:36.781894    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:37.250929    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:37.350585    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:37.350739    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:37.663673    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:37.779612    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:37.781600    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:38.163826    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:38.281431    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:38.281749    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:38.664064    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:38.782000    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:38.782143    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 11:20:39.163916    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:39.279838    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:39.281616    1678 kapi.go:107] duration metric: took 41.501385417s to wait for kubernetes.io/minikube-addons=registry ...
	I0924 11:20:39.663768    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:39.781732    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:40.163961    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:40.281577    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:40.663628    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:40.781422    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:41.163733    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:41.281828    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:41.674210    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:41.787333    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:42.163668    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:42.281549    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:42.663762    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:42.781403    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:43.162555    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:43.281814    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:43.663742    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:43.781425    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:44.163307    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:44.280042    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:44.662374    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:44.781822    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:45.163488    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:45.281593    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:45.664614    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:45.782505    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:46.165366    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:46.281319    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:46.663699    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:46.781673    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:47.164135    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:47.282218    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:47.667007    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:47.786326    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:48.164920    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:48.281527    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:48.662941    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:48.781353    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:49.163745    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:49.281628    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:49.663930    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:49.782615    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:50.163704    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:50.281963    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:50.663673    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:50.781704    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:51.166257    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:51.281358    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:51.662344    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:51.781466    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:52.163774    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:52.281473    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:52.663807    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:52.779535    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:53.163507    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:53.281056    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:53.663787    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:53.781337    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:54.163784    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:54.281188    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:54.663396    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:54.781200    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:55.163431    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:55.281106    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:55.663719    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:55.780948    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:56.163766    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:56.279860    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:56.665050    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:56.783436    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:57.163954    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:57.280983    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:57.663544    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:57.779619    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:58.163669    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:58.281283    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:58.663438    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:58.781282    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:59.168440    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 11:20:59.281613    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:20:59.669647    1678 kapi.go:107] duration metric: took 1m5.010506916s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0924 11:20:59.784969    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:00.281344    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:00.785057    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:01.281771    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:01.779656    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:02.281676    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:02.779500    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:03.281107    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:03.781125    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:04.281241    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:04.781197    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:05.281071    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:05.781190    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:06.281402    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:06.780109    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:07.282762    1678 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 11:21:07.790403    1678 kapi.go:107] duration metric: took 1m10.013445167s to wait for app.kubernetes.io/name=ingress-nginx ...
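
The kapi.go:96 and kapi.go:107 lines above record minikube's addon readiness poll: pods matching a label selector are listed on a short interval and their phase is logged until every pod reports Running, at which point a duration metric is emitted. Below is a minimal client-go sketch of such a loop, assuming a standard kubeconfig; it is not minikube's actual kapi.go implementation, and the 500ms interval, namespace, and timeout are illustrative assumptions.

```go
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls pods matching selector in ns until all report Running,
// logging each non-ready check much like the kapi.go:96 lines above.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			if len(pods.Items) == 0 {
				log.Printf("waiting for pod %q, no pods found yet", selector)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil // every matching pod is Running
		})
	if err == nil {
		log.Printf("duration metric: took %s to wait for %s", time.Since(start), selector)
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Selector taken from the log above; the 6-minute timeout is an assumption.
	if err := waitForPods(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}
```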
	I0924 11:21:23.404478    1678 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0924 11:21:23.404492    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:23.904778    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:24.404763    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:24.904523    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:25.405097    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:25.906904    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:26.404947    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:26.906736    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:27.408942    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:27.909635    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:28.405261    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:28.909271    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:29.407715    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:29.906772    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:30.405238    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:30.905493    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:31.405885    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:31.904603    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:32.404949    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:32.905044    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:33.404889    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:33.905281    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:34.409328    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:34.903991    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:35.406079    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:35.905910    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:36.404721    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:36.904160    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:37.405332    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:37.908831    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:38.404527    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:38.906173    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:39.406205    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:39.907836    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:40.406606    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:40.904894    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:41.408416    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:41.903675    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:42.404092    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:42.906245    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:43.403643    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:43.905543    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:44.404089    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:44.910921    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:45.406206    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:45.903691    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:46.403614    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:46.905779    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:47.408729    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:47.905258    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:48.404030    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:48.905167    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:49.407040    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:49.905836    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:50.405359    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:50.905367    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:51.406144    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:51.910141    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:52.403727    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:52.905160    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:53.405620    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:53.904738    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:54.403837    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:54.906469    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:55.404757    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:55.905652    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:56.404467    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:56.905191    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:57.403406    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:57.903897    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:58.403135    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:58.907451    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:59.407829    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:21:59.908978    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:00.404306    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:00.906608    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:01.405373    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:01.907207    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:02.404363    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:02.906236    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:03.404316    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:03.908505    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:04.404284    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:04.903413    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:05.404252    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:05.904528    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:06.404696    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:06.904120    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:07.407185    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:07.905904    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:08.404114    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:08.904079    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:09.406296    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:09.909922    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:10.404394    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:10.904498    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:11.407995    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:11.906990    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:12.406442    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:12.905724    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:13.404613    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:13.908124    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:14.404448    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:14.911253    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:15.407599    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:15.905115    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:16.405086    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:16.913633    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:17.407335    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:17.907305    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:18.404474    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:18.904665    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:19.408153    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:19.905634    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:20.408250    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:20.906018    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:21.405060    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:21.905357    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:22.403196    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:22.903030    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:23.403401    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:23.903738    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:24.409615    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:24.903440    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:25.403460    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:25.905491    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:26.409246    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:26.904251    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:27.406625    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:27.903178    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:28.403581    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:28.903042    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:29.401429    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:29.903431    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:30.403450    1678 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 11:22:30.902951    1678 kapi.go:107] duration metric: took 2m29.503742459s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0924 11:22:30.908334    1678 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-472000 cluster.
	I0924 11:22:30.919965    1678 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0924 11:22:30.923291    1678 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
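
To make the gcp-auth-skip-secret opt-out described above concrete, here is a hedged client-go sketch that creates a pod carrying that label so the addon skips mounting credentials into it. Only the label key comes from the message above; the pod name, namespace, image, and command are hypothetical placeholders.

```go
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // hypothetical pod name
			Labels: map[string]string{
				// Label key from the gcp-auth message above; its presence
				// tells the addon to skip mounting GCP credentials here.
				"gcp-auth-skip-secret": "true",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "busybox:1.36", Command: []string{"sleep", "3600"}},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("created pod without gcp-auth credential mount")
}
```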
	I0924 11:22:30.926444    1678 out.go:177] * Enabled addons: default-storageclass, storage-provisioner-rancher, volcano, ingress-dns, volumesnapshots, cloud-spanner, metrics-server, nvidia-device-plugin, inspektor-gadget, storage-provisioner, yakd, registry, csi-hostpath-driver, ingress, gcp-auth
	I0924 11:22:30.930235    1678 addons.go:510] duration metric: took 2m37.990138042s for enable addons: enabled=[default-storageclass storage-provisioner-rancher volcano ingress-dns volumesnapshots cloud-spanner metrics-server nvidia-device-plugin inspektor-gadget storage-provisioner yakd registry csi-hostpath-driver ingress gcp-auth]
	I0924 11:22:30.930248    1678 start.go:246] waiting for cluster config update ...
	I0924 11:22:30.930264    1678 start.go:255] writing updated cluster config ...
	I0924 11:22:30.931113    1678 ssh_runner.go:195] Run: rm -f paused
	I0924 11:22:31.087965    1678 start.go:600] kubectl: 1.30.2, cluster: 1.31.1 (minor skew: 1)
	I0924 11:22:31.091380    1678 out.go:177] * Done! kubectl is now configured to use "addons-472000" cluster and "default" namespace by default
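	The gcp-auth messages above name two knobs: a `gcp-auth-skip-secret` label that opts a single pod out of credential mounting, and a rerun of the enable step with --refresh to mount credentials into pods that already existed. A minimal sketch of both, under the assumption that the webhook keys off the label's presence (the pod name, image, and label value below are illustrative, not taken from this run):

	  # Opt one pod out of GCP credential mounting via the label named in the log:
	  kubectl apply -f - <<'EOF'
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: skip-gcp-auth-demo          # hypothetical name
	    labels:
	      gcp-auth-skip-secret: "true"    # the key is what matters per the message above; the value is assumed
	  spec:
	    containers:
	    - name: app
	      image: busybox
	      command: ["sleep", "3600"]
	  EOF

	  # Mount credentials into pods that existed before the addon was enabled, per the log:
	  minikube addons enable gcp-auth --refresh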
	
	
	==> Docker <==
	Sep 24 18:31:55 addons-472000 dockerd[1292]: time="2024-09-24T18:31:55.265473612Z" level=info msg="shim disconnected" id=8fedb2594291d4f3ca09d31f080ff088d5a18b4ae985dd424834ded363222e50 namespace=moby
	Sep 24 18:31:55 addons-472000 dockerd[1292]: time="2024-09-24T18:31:55.265505279Z" level=warning msg="cleaning up after shim disconnected" id=8fedb2594291d4f3ca09d31f080ff088d5a18b4ae985dd424834ded363222e50 namespace=moby
	Sep 24 18:31:55 addons-472000 dockerd[1292]: time="2024-09-24T18:31:55.265509695Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 24 18:32:00 addons-472000 dockerd[1286]: time="2024-09-24T18:32:00.328990419Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=a9a2e508b3e54d6e traceID=6960f951776d790a2e92de07bf56035d
	Sep 24 18:32:00 addons-472000 dockerd[1286]: time="2024-09-24T18:32:00.330745605Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=a9a2e508b3e54d6e traceID=6960f951776d790a2e92de07bf56035d
	Sep 24 18:32:21 addons-472000 dockerd[1286]: time="2024-09-24T18:32:21.547039794Z" level=info msg="ignoring event" container=cf753dd09b2a1989b8fe48a698991e88eda64b04c13cf85faf7ea8ca9f1181a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:32:21 addons-472000 dockerd[1292]: time="2024-09-24T18:32:21.547673870Z" level=info msg="shim disconnected" id=cf753dd09b2a1989b8fe48a698991e88eda64b04c13cf85faf7ea8ca9f1181a0 namespace=moby
	Sep 24 18:32:21 addons-472000 dockerd[1292]: time="2024-09-24T18:32:21.547711245Z" level=warning msg="cleaning up after shim disconnected" id=cf753dd09b2a1989b8fe48a698991e88eda64b04c13cf85faf7ea8ca9f1181a0 namespace=moby
	Sep 24 18:32:21 addons-472000 dockerd[1292]: time="2024-09-24T18:32:21.547716161Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 24 18:32:21 addons-472000 dockerd[1286]: time="2024-09-24T18:32:21.718155682Z" level=info msg="ignoring event" container=fbe92c11fa8b18ef180310055a9ca1780a732f640ad34c203a0e3b7a5ba64eb8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:32:21 addons-472000 dockerd[1292]: time="2024-09-24T18:32:21.720974023Z" level=info msg="shim disconnected" id=fbe92c11fa8b18ef180310055a9ca1780a732f640ad34c203a0e3b7a5ba64eb8 namespace=moby
	Sep 24 18:32:21 addons-472000 dockerd[1292]: time="2024-09-24T18:32:21.721010064Z" level=warning msg="cleaning up after shim disconnected" id=fbe92c11fa8b18ef180310055a9ca1780a732f640ad34c203a0e3b7a5ba64eb8 namespace=moby
	Sep 24 18:32:21 addons-472000 dockerd[1292]: time="2024-09-24T18:32:21.721015898Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 24 18:32:21 addons-472000 dockerd[1292]: time="2024-09-24T18:32:21.765076781Z" level=info msg="shim disconnected" id=e72741fb3d43cc02dc8a2004870fbd37a26e7649086999187ae2160a34003414 namespace=moby
	Sep 24 18:32:21 addons-472000 dockerd[1286]: time="2024-09-24T18:32:21.765354695Z" level=info msg="ignoring event" container=e72741fb3d43cc02dc8a2004870fbd37a26e7649086999187ae2160a34003414 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:32:21 addons-472000 dockerd[1292]: time="2024-09-24T18:32:21.765420694Z" level=warning msg="cleaning up after shim disconnected" id=e72741fb3d43cc02dc8a2004870fbd37a26e7649086999187ae2160a34003414 namespace=moby
	Sep 24 18:32:21 addons-472000 dockerd[1292]: time="2024-09-24T18:32:21.765430194Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 24 18:32:21 addons-472000 dockerd[1292]: time="2024-09-24T18:32:21.818006058Z" level=info msg="shim disconnected" id=7c9b4e3b8d2be7fef8f4c4db526f0949eac0fdb4c92e98548387c8d596d40694 namespace=moby
	Sep 24 18:32:21 addons-472000 dockerd[1292]: time="2024-09-24T18:32:21.818036016Z" level=warning msg="cleaning up after shim disconnected" id=7c9b4e3b8d2be7fef8f4c4db526f0949eac0fdb4c92e98548387c8d596d40694 namespace=moby
	Sep 24 18:32:21 addons-472000 dockerd[1292]: time="2024-09-24T18:32:21.818040349Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 24 18:32:21 addons-472000 dockerd[1286]: time="2024-09-24T18:32:21.818351720Z" level=info msg="ignoring event" container=7c9b4e3b8d2be7fef8f4c4db526f0949eac0fdb4c92e98548387c8d596d40694 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:32:21 addons-472000 dockerd[1292]: time="2024-09-24T18:32:21.878427160Z" level=info msg="shim disconnected" id=70d0a522c13367dbfad626b28a4724a51d600e5f39e54d222d9acaa475e7af92 namespace=moby
	Sep 24 18:32:21 addons-472000 dockerd[1286]: time="2024-09-24T18:32:21.878581700Z" level=info msg="ignoring event" container=70d0a522c13367dbfad626b28a4724a51d600e5f39e54d222d9acaa475e7af92 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:32:21 addons-472000 dockerd[1292]: time="2024-09-24T18:32:21.878923321Z" level=warning msg="cleaning up after shim disconnected" id=70d0a522c13367dbfad626b28a4724a51d600e5f39e54d222d9acaa475e7af92 namespace=moby
	Sep 24 18:32:21 addons-472000 dockerd[1292]: time="2024-09-24T18:32:21.878933987Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	7d451c8e1bcdb       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                                              34 seconds ago      Exited              helper-pod                               0                   7f9ff38976ddc       helper-pod-create-pvc-5d9acef8-c72c-4cb0-b678-9b4ebcfd0da9
	f87e3faee9aa3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            43 seconds ago      Exited              gadget                                   7                   be918bca189e7       gadget-f4td4
	fccd28302eaee       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   a446b2634281c       gcp-auth-89d5ffd79-nlzjr
	91daee582cb2e       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             11 minutes ago      Running             controller                               0                   c978f16c8e82c       ingress-nginx-controller-bc57996ff-snzg2
	0f9f73cb30c52       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   2baf239d410d0       csi-hostpathplugin-dhzfg
	d85e9c93c8ba3       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   2baf239d410d0       csi-hostpathplugin-dhzfg
	3e0cb46e51111       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   2baf239d410d0       csi-hostpathplugin-dhzfg
	5f0d7c40efe2b       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   2baf239d410d0       csi-hostpathplugin-dhzfg
	03f4dc10bdcab       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   2baf239d410d0       csi-hostpathplugin-dhzfg
	e750a408ddd88       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   11 minutes ago      Exited              patch                                    0                   5e9e1f384f25b       ingress-nginx-admission-patch-zp8ts
	10cc8af3ee57a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   11 minutes ago      Exited              create                                   0                   2b0a09dcd2489       ingress-nginx-admission-create-jbdh4
	e72741fb3d43c       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              11 minutes ago      Exited              registry-proxy                           0                   70d0a522c1336       registry-proxy-jxwjp
	fbe92c11fa8b1       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             11 minutes ago      Exited              registry                                 0                   7c9b4e3b8d2be       registry-66c9cd494c-dr9lr
	665494b77b6dc       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        12 minutes ago      Running             metrics-server                           0                   9183fa41cc48c       metrics-server-84c5f94fbc-x8kns
	b0852bfbd6601       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               12 minutes ago      Running             cloud-spanner-emulator                   0                   399cfd64adf52       cloud-spanner-emulator-5b584cc74-9flrf
	72dbe864d3564       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             12 minutes ago      Running             minikube-ingress-dns                     0                   f0b1972637f1a       kube-ingress-dns-minikube
	bb7e573ed71d2       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              12 minutes ago      Running             csi-resizer                              0                   6422b81d73b03       csi-hostpath-resizer-0
	1095e43824a30       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   12 minutes ago      Running             csi-external-health-monitor-controller   0                   2baf239d410d0       csi-hostpathplugin-dhzfg
	7fb0c632aeab3       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             12 minutes ago      Running             csi-attacher                             0                   b5effc3c7ea1a       csi-hostpath-attacher-0
	ad66b50a74053       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   cbbd62cae7e28       snapshot-controller-56fcc65765-l9rxm
	cf7e4df5dfda1       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      12 minutes ago      Running             volume-snapshot-controller               0                   af064f080e268       snapshot-controller-56fcc65765-f2zjb
	45b4bc969fa59       ba04bb24b9575                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   47f39b20b774d       storage-provisioner
	d2b62227255ba       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       12 minutes ago      Running             local-path-provisioner                   0                   8df1059c32f7f       local-path-provisioner-86d989889c-xgmxx
	928dd99ee5965       24a140c548c07                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   f67cb3e564e4f       kube-proxy-qhbt7
	8e25969350930       2f6c962e7b831                                                                                                                                12 minutes ago      Running             coredns                                  0                   1422514c91f3d       coredns-7c65d6cfc9-rdmmc
	ac9a92cd29d69       279f381cb3736                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   8a8b375ebb4ee       kube-controller-manager-addons-472000
	01df9315d309e       27e3830e14027                                                                                                                                12 minutes ago      Running             etcd                                     0                   27546a0fc33c6       etcd-addons-472000
	3618c1b2f0b94       d3f53a98c0a9d                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   3a931057d8c7d       kube-apiserver-addons-472000
	3c836ee290a44       7f8aa378bb47d                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   c0b045712d0cd       kube-scheduler-addons-472000
	
	
	==> controller_ingress [91daee582cb2] <==
	W0924 18:21:06.664094       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0924 18:21:06.664188       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0924 18:21:06.667171       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0924 18:21:06.766114       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0924 18:21:06.776615       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0924 18:21:06.782521       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0924 18:21:06.789614       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"c07b7aaa-0c5b-4678-8af9-9e0295c0cd5a", APIVersion:"v1", ResourceVersion:"719", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0924 18:21:06.790431       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"485c7483-1c1a-479f-95c0-7afe7857ec29", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0924 18:21:06.790500       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"737d116b-bb0c-48c7-97ad-37c507be39a2", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0924 18:21:07.984293       7 nginx.go:317] "Starting NGINX process"
	I0924 18:21:07.984494       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0924 18:21:07.984734       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0924 18:21:07.985167       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0924 18:21:08.003644       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0924 18:21:08.003999       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-snzg2"
	I0924 18:21:08.014237       7 controller.go:213] "Backend successfully reloaded"
	I0924 18:21:08.014376       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0924 18:21:08.014560       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-snzg2", UID:"3ba19c7f-9857-4d53-bda2-1ef8b7127f59", APIVersion:"v1", ResourceVersion:"745", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0924 18:21:08.105851       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-snzg2" node="addons-472000"
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [8e2596935093] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.14:36342 - 13590 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000142792s
	[INFO] 10.244.0.14:36342 - 16144 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00018375s
	[INFO] 10.244.0.14:33285 - 19406 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000048125s
	[INFO] 10.244.0.14:33285 - 47822 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000029208s
	[INFO] 10.244.0.14:39690 - 10815 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004175s
	[INFO] 10.244.0.14:39690 - 36153 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034s
	[INFO] 10.244.0.14:56772 - 51600 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000039791s
	[INFO] 10.244.0.14:56772 - 47249 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000062209s
	[INFO] 10.244.0.14:44083 - 20703 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000034041s
	[INFO] 10.244.0.14:44083 - 62174 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000012542s
	[INFO] 10.244.0.14:55205 - 60358 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00001625s
	[INFO] 10.244.0.14:55205 - 56513 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079125s
	[INFO] 10.244.0.14:39906 - 19897 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000012834s
	[INFO] 10.244.0.14:39906 - 40377 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000016917s
	[INFO] 10.244.0.14:47288 - 14685 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000021417s
	[INFO] 10.244.0.14:47288 - 54876 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000015042s
	[INFO] 10.244.0.24:38731 - 4183 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000122791s
	[INFO] 10.244.0.24:37955 - 39394 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0001435s
	[INFO] 10.244.0.24:58276 - 7373 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00004025s
	[INFO] 10.244.0.24:36606 - 47023 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000030333s
	[INFO] 10.244.0.24:58032 - 49853 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000076167s
	[INFO] 10.244.0.24:49124 - 63516 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000045s
	[INFO] 10.244.0.24:37905 - 60328 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002529206s
	[INFO] 10.244.0.24:43269 - 5593 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00256133s
	
	
	==> describe nodes <==
	Name:               addons-472000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-472000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=addons-472000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T11_19_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-472000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-472000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:19:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-472000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:32:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:31:21 +0000   Tue, 24 Sep 2024 18:19:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:31:21 +0000   Tue, 24 Sep 2024 18:19:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:31:21 +0000   Tue, 24 Sep 2024 18:19:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:31:21 +0000   Tue, 24 Sep 2024 18:19:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-472000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 04babbaa80c84efa92f227140154f3f3
	  System UUID:                04babbaa80c84efa92f227140154f3f3
	  Boot ID:                    da6e222e-e5f4-4831-9c22-fc436386cadc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-5b584cc74-9flrf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-f4td4                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-nlzjr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-snzg2    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-rdmmc                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-dhzfg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-472000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-472000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-472000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-qhbt7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-472000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-x8kns             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-f2zjb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-l9rxm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-xgmxx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-472000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-472000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-472000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-472000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-472000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-472000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-472000 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-472000 event: Registered Node addons-472000 in Controller
	
	
	==> dmesg <==
	[  +0.050525] kauditd_printk_skb: 86 callbacks suppressed
	[  +5.165999] kauditd_printk_skb: 291 callbacks suppressed
	[Sep24 18:20] kauditd_printk_skb: 35 callbacks suppressed
	[ +26.393437] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.177915] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.724600] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.057067] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.889923] kauditd_printk_skb: 26 callbacks suppressed
	[Sep24 18:21] kauditd_printk_skb: 32 callbacks suppressed
	[ +14.330389] kauditd_printk_skb: 16 callbacks suppressed
	[Sep24 18:22] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.565348] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.330479] kauditd_printk_skb: 2 callbacks suppressed
	[ +20.712199] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.321649] kauditd_printk_skb: 2 callbacks suppressed
	[Sep24 18:23] kauditd_printk_skb: 20 callbacks suppressed
	[ +19.585670] kauditd_printk_skb: 2 callbacks suppressed
	[Sep24 18:26] kauditd_printk_skb: 2 callbacks suppressed
	[Sep24 18:31] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.156448] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.073839] kauditd_printk_skb: 11 callbacks suppressed
	[ +10.324499] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.285055] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.527488] kauditd_printk_skb: 23 callbacks suppressed
	[Sep24 18:32] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [01df9315d309] <==
	{"level":"info","ts":"2024-09-24T18:20:10.335106Z","caller":"traceutil/trace.go:171","msg":"trace[636399960] transaction","detail":"{read_only:false; response_revision:1018; number_of_response:1; }","duration":"103.618852ms","start":"2024-09-24T18:20:10.231479Z","end":"2024-09-24T18:20:10.335098Z","steps":["trace[636399960] 'process raft request'  (duration: 103.546894ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:20:16.962353Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.944664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:20:16.962489Z","caller":"traceutil/trace.go:171","msg":"trace[1720797170] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1036; }","duration":"211.268414ms","start":"2024-09-24T18:20:16.751214Z","end":"2024-09-24T18:20:16.962482Z","steps":["trace[1720797170] 'range keys from in-memory index tree'  (duration: 210.904581ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:20:20.766261Z","caller":"traceutil/trace.go:171","msg":"trace[1325124353] transaction","detail":"{read_only:false; response_revision:1053; number_of_response:1; }","duration":"213.396455ms","start":"2024-09-24T18:20:20.552856Z","end":"2024-09-24T18:20:20.766252Z","steps":["trace[1325124353] 'process raft request'  (duration: 212.751205ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:20:26.010179Z","caller":"traceutil/trace.go:171","msg":"trace[1961074505] linearizableReadLoop","detail":"{readStateIndex:1098; appliedIndex:1097; }","duration":"140.887638ms","start":"2024-09-24T18:20:25.869284Z","end":"2024-09-24T18:20:26.010172Z","steps":["trace[1961074505] 'read index received'  (duration: 140.817888ms)","trace[1961074505] 'applied index is now lower than readState.Index'  (duration: 69.583µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T18:20:26.010257Z","caller":"traceutil/trace.go:171","msg":"trace[1355171353] transaction","detail":"{read_only:false; response_revision:1076; number_of_response:1; }","duration":"205.799207ms","start":"2024-09-24T18:20:25.804454Z","end":"2024-09-24T18:20:26.010253Z","steps":["trace[1355171353] 'process raft request'  (duration: 205.668124ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:20:26.010325Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.04847ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:20:26.010337Z","caller":"traceutil/trace.go:171","msg":"trace[1322193218] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1076; }","duration":"141.067596ms","start":"2024-09-24T18:20:25.869266Z","end":"2024-09-24T18:20:26.010333Z","steps":["trace[1322193218] 'agreement among raft nodes before linearized reading'  (duration: 141.041971ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:20:26.010376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.273263ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:20:26.010383Z","caller":"traceutil/trace.go:171","msg":"trace[461321119] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1076; }","duration":"140.280887ms","start":"2024-09-24T18:20:25.870100Z","end":"2024-09-24T18:20:26.010381Z","steps":["trace[461321119] 'agreement among raft nodes before linearized reading'  (duration: 140.270304ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:20:27.375602Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.835558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:20:27.375628Z","caller":"traceutil/trace.go:171","msg":"trace[831397448] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1079; }","duration":"123.866891ms","start":"2024-09-24T18:20:27.251756Z","end":"2024-09-24T18:20:27.375622Z","steps":["trace[831397448] 'range keys from in-memory index tree'  (duration: 123.814224ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:20:37.337128Z","caller":"traceutil/trace.go:171","msg":"trace[2125093626] linearizableReadLoop","detail":"{readStateIndex:1147; appliedIndex:1146; }","duration":"226.515204ms","start":"2024-09-24T18:20:37.110605Z","end":"2024-09-24T18:20:37.337120Z","steps":["trace[2125093626] 'read index received'  (duration: 226.424579ms)","trace[2125093626] 'applied index is now lower than readState.Index'  (duration: 90.375µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T18:20:37.337263Z","caller":"traceutil/trace.go:171","msg":"trace[303229524] transaction","detail":"{read_only:false; response_revision:1123; number_of_response:1; }","duration":"248.420866ms","start":"2024-09-24T18:20:37.088838Z","end":"2024-09-24T18:20:37.337259Z","steps":["trace[303229524] 'process raft request'  (duration: 248.224866ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:20:37.337334Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.723371ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:20:37.337352Z","caller":"traceutil/trace.go:171","msg":"trace[2049414854] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1123; }","duration":"226.747287ms","start":"2024-09-24T18:20:37.110602Z","end":"2024-09-24T18:20:37.337349Z","steps":["trace[2049414854] 'agreement among raft nodes before linearized reading'  (duration: 226.71837ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:20:37.337559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.0195ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.105.2\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-09-24T18:20:37.337586Z","caller":"traceutil/trace.go:171","msg":"trace[1926672835] range","detail":"{range_begin:/registry/masterleases/192.168.105.2; range_end:; response_count:1; response_revision:1123; }","duration":"206.050249ms","start":"2024-09-24T18:20:37.131533Z","end":"2024-09-24T18:20:37.337583Z","steps":["trace[1926672835] 'agreement among raft nodes before linearized reading'  (duration: 205.880249ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:20:37.337703Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.832547ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:20:37.337718Z","caller":"traceutil/trace.go:171","msg":"trace[1279771462] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1123; }","duration":"177.848131ms","start":"2024-09-24T18:20:37.159868Z","end":"2024-09-24T18:20:37.337716Z","steps":["trace[1279771462] 'agreement among raft nodes before linearized reading'  (duration: 177.828589ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:22:53.778886Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.207041ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-09-24T18:22:53.778953Z","caller":"traceutil/trace.go:171","msg":"trace[852606154] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1563; }","duration":"121.28275ms","start":"2024-09-24T18:22:53.657663Z","end":"2024-09-24T18:22:53.778946Z","steps":["trace[852606154] 'range keys from in-memory index tree'  (duration: 121.144375ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:29:45.471818Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1893}
	{"level":"info","ts":"2024-09-24T18:29:45.571112Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1893,"took":"97.984791ms","hash":861717282,"current-db-size-bytes":9019392,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":5001216,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-24T18:29:45.571611Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":861717282,"revision":1893,"compact-revision":-1}
	
	
	==> gcp-auth [fccd28302eae] <==
	2024/09/24 18:22:30 GCP Auth Webhook started!
	2024/09/24 18:22:46 Ready to marshal response ...
	2024/09/24 18:22:46 Ready to write response ...
	2024/09/24 18:22:47 Ready to marshal response ...
	2024/09/24 18:22:47 Ready to write response ...
	2024/09/24 18:23:10 Ready to marshal response ...
	2024/09/24 18:23:10 Ready to write response ...
	2024/09/24 18:23:10 Ready to marshal response ...
	2024/09/24 18:23:10 Ready to write response ...
	2024/09/24 18:23:10 Ready to marshal response ...
	2024/09/24 18:23:10 Ready to write response ...
	2024/09/24 18:31:11 Ready to marshal response ...
	2024/09/24 18:31:11 Ready to write response ...
	2024/09/24 18:31:11 Ready to marshal response ...
	2024/09/24 18:31:11 Ready to write response ...
	2024/09/24 18:31:11 Ready to marshal response ...
	2024/09/24 18:31:11 Ready to write response ...
	2024/09/24 18:31:21 Ready to marshal response ...
	2024/09/24 18:31:21 Ready to write response ...
	2024/09/24 18:31:46 Ready to marshal response ...
	2024/09/24 18:31:46 Ready to write response ...
	2024/09/24 18:31:46 Ready to marshal response ...
	2024/09/24 18:31:46 Ready to write response ...
	2024/09/24 18:31:55 Ready to marshal response ...
	2024/09/24 18:31:55 Ready to write response ...
	
	
	==> kernel <==
	 18:32:22 up 12 min,  0 users,  load average: 0.51, 0.60, 0.48
	Linux addons-472000 5.10.207 #1 SMP PREEMPT Mon Sep 23 18:07:35 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3618c1b2f0b9] <==
	W0924 18:22:04.512994       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.135.184:443: connect: connection refused
	E0924 18:22:04.513016       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.135.184:443: connect: connection refused" logger="UnhandledError"
	I0924 18:22:46.387141       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0924 18:22:46.397440       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0924 18:22:59.745449       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0924 18:22:59.796069       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0924 18:22:59.904004       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0924 18:22:59.917853       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0924 18:22:59.924770       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0924 18:23:00.129877       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0924 18:23:00.129900       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0924 18:23:00.151349       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0924 18:23:00.296366       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0924 18:23:00.822499       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0924 18:23:00.952915       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0924 18:23:01.143687       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0924 18:23:01.183355       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0924 18:23:01.191939       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0924 18:23:01.297569       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W0924 18:23:01.301018       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	I0924 18:31:11.569865       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.246.85"}
	E0924 18:31:56.823368       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:56.844796       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:31:56.853839       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0924 18:32:11.860022       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [ac9a92cd29d6] <==
	I0924 18:31:21.743545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-472000"
	I0924 18:31:24.815492       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="3.666µs"
	W0924 18:31:27.122735       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:27.124090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0924 18:31:34.903751       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0924 18:31:35.141887       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="2.667µs"
	W0924 18:31:40.729899       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:40.729944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:45.023431       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:45.023475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0924 18:31:45.179145       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0924 18:31:45.515340       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:45.515445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:51.226690       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:51.226886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0924 18:31:56.052865       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="2.709µs"
	W0924 18:31:58.743826       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:58.743931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:32:13.627608       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:32:13.627707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:32:20.848310       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:32:20.848609       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:32:21.495493       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:32:21.495529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0924 18:32:21.691701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="1.916µs"
	
	
	==> kube-proxy [928dd99ee596] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 18:19:55.364226       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 18:19:55.388082       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0924 18:19:55.388117       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 18:19:55.461547       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 18:19:55.461568       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 18:19:55.461582       1 server_linux.go:169] "Using iptables Proxier"
	I0924 18:19:55.464234       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 18:19:55.464383       1 server.go:483] "Version info" version="v1.31.1"
	I0924 18:19:55.464390       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:19:55.465724       1 config.go:199] "Starting service config controller"
	I0924 18:19:55.467126       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 18:19:55.467172       1 config.go:105] "Starting endpoint slice config controller"
	I0924 18:19:55.467205       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 18:19:55.467773       1 config.go:328] "Starting node config controller"
	I0924 18:19:55.468841       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 18:19:55.568118       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 18:19:55.568142       1 shared_informer.go:320] Caches are synced for service config
	I0924 18:19:55.571315       1 shared_informer.go:320] Caches are synced for node config
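	The 18:19:55 kube-proxy warning above flags that nodePortAddresses is unset and points at its own remedy. As a sketch of the suggested setting (hypothetical invocation for illustration only; in this run kube-proxy's arguments are managed by kubeadm/minikube rather than passed by hand):

	  # Restrict NodePort listeners to the node's primary IP, as the warning suggests:
	  kube-proxy --nodeport-addresses=primary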
	
	
	==> kube-scheduler [3c836ee290a4] <==
	W0924 18:19:45.421808       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 18:19:45.421815       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:19:45.421725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 18:19:45.421849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:19:45.421517       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 18:19:45.421956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:19:45.421468       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 18:19:45.421970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:19:45.421741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 18:19:45.422031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:19:45.421453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0924 18:19:45.422072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:19:46.249928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 18:19:46.249962       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:19:46.254563       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 18:19:46.254574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 18:19:46.289288       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 18:19:46.289327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:19:46.389469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 18:19:46.389555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:19:46.443536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 18:19:46.443616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0924 18:19:46.456449       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 18:19:46.456471       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0924 18:19:47.018424       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 18:32:01 addons-472000 kubelet[2063]: I0924 18:32:01.420666    2063 reconciler_common.go:288] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/57a4bf4b-cf88-41bc-a601-0ff55484af1e-data\") on node \"addons-472000\" DevicePath \"\""
	Sep 24 18:32:01 addons-472000 kubelet[2063]: I0924 18:32:01.420680    2063 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/57a4bf4b-cf88-41bc-a601-0ff55484af1e-gcp-creds\") on node \"addons-472000\" DevicePath \"\""
	Sep 24 18:32:02 addons-472000 kubelet[2063]: I0924 18:32:02.328491    2063 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2dlwc\" (UniqueName: \"kubernetes.io/projected/57a4bf4b-cf88-41bc-a601-0ff55484af1e-kube-api-access-2dlwc\") on node \"addons-472000\" DevicePath \"\""
	Sep 24 18:32:02 addons-472000 kubelet[2063]: I0924 18:32:02.328507    2063 reconciler_common.go:288] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/57a4bf4b-cf88-41bc-a601-0ff55484af1e-script\") on node \"addons-472000\" DevicePath \"\""
	Sep 24 18:32:04 addons-472000 kubelet[2063]: I0924 18:32:04.167803    2063 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a4bf4b-cf88-41bc-a601-0ff55484af1e" path="/var/lib/kubelet/pods/57a4bf4b-cf88-41bc-a601-0ff55484af1e/volumes"
	Sep 24 18:32:06 addons-472000 kubelet[2063]: E0924 18:32:06.162435    2063 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="f46bcd30-4292-41a8-b552-64d52343f834"
	Sep 24 18:32:10 addons-472000 kubelet[2063]: I0924 18:32:10.169146    2063 scope.go:117] "RemoveContainer" containerID="f87e3faee9aa35acada99d1dc194cebfb1952df59f45b1ba65359ac2f355d004"
	Sep 24 18:32:10 addons-472000 kubelet[2063]: E0924 18:32:10.170352    2063 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-f4td4_gadget(26314592-c8df-425c-ac98-3ec30d5edc99)\"" pod="gadget/gadget-f4td4" podUID="26314592-c8df-425c-ac98-3ec30d5edc99"
	Sep 24 18:32:13 addons-472000 kubelet[2063]: E0924 18:32:13.161616    2063 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="a6c7dea8-213c-47ab-8aef-6bb2f332cd98"
	Sep 24 18:32:17 addons-472000 kubelet[2063]: E0924 18:32:17.164886    2063 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="f46bcd30-4292-41a8-b552-64d52343f834"
	Sep 24 18:32:21 addons-472000 kubelet[2063]: I0924 18:32:21.158982    2063 scope.go:117] "RemoveContainer" containerID="f87e3faee9aa35acada99d1dc194cebfb1952df59f45b1ba65359ac2f355d004"
	Sep 24 18:32:21 addons-472000 kubelet[2063]: E0924 18:32:21.159425    2063 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-f4td4_gadget(26314592-c8df-425c-ac98-3ec30d5edc99)\"" pod="gadget/gadget-f4td4" podUID="26314592-c8df-425c-ac98-3ec30d5edc99"
	Sep 24 18:32:21 addons-472000 kubelet[2063]: I0924 18:32:21.649374    2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a6c7dea8-213c-47ab-8aef-6bb2f332cd98-gcp-creds\") pod \"a6c7dea8-213c-47ab-8aef-6bb2f332cd98\" (UID: \"a6c7dea8-213c-47ab-8aef-6bb2f332cd98\") "
	Sep 24 18:32:21 addons-472000 kubelet[2063]: I0924 18:32:21.649469    2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twgmc\" (UniqueName: \"kubernetes.io/projected/a6c7dea8-213c-47ab-8aef-6bb2f332cd98-kube-api-access-twgmc\") pod \"a6c7dea8-213c-47ab-8aef-6bb2f332cd98\" (UID: \"a6c7dea8-213c-47ab-8aef-6bb2f332cd98\") "
	Sep 24 18:32:21 addons-472000 kubelet[2063]: I0924 18:32:21.649934    2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c7dea8-213c-47ab-8aef-6bb2f332cd98-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "a6c7dea8-213c-47ab-8aef-6bb2f332cd98" (UID: "a6c7dea8-213c-47ab-8aef-6bb2f332cd98"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 24 18:32:21 addons-472000 kubelet[2063]: I0924 18:32:21.661465    2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c7dea8-213c-47ab-8aef-6bb2f332cd98-kube-api-access-twgmc" (OuterVolumeSpecName: "kube-api-access-twgmc") pod "a6c7dea8-213c-47ab-8aef-6bb2f332cd98" (UID: "a6c7dea8-213c-47ab-8aef-6bb2f332cd98"). InnerVolumeSpecName "kube-api-access-twgmc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 24 18:32:21 addons-472000 kubelet[2063]: I0924 18:32:21.749931    2063 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a6c7dea8-213c-47ab-8aef-6bb2f332cd98-gcp-creds\") on node \"addons-472000\" DevicePath \"\""
	Sep 24 18:32:21 addons-472000 kubelet[2063]: I0924 18:32:21.749946    2063 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-twgmc\" (UniqueName: \"kubernetes.io/projected/a6c7dea8-213c-47ab-8aef-6bb2f332cd98-kube-api-access-twgmc\") on node \"addons-472000\" DevicePath \"\""
	Sep 24 18:32:21 addons-472000 kubelet[2063]: I0924 18:32:21.953885    2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vd9l\" (UniqueName: \"kubernetes.io/projected/56205fad-453c-44bc-b682-3000f315999a-kube-api-access-7vd9l\") pod \"56205fad-453c-44bc-b682-3000f315999a\" (UID: \"56205fad-453c-44bc-b682-3000f315999a\") "
	Sep 24 18:32:21 addons-472000 kubelet[2063]: I0924 18:32:21.953924    2063 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqsxx\" (UniqueName: \"kubernetes.io/projected/c4fce1ef-d25b-40aa-add1-32efc614c74c-kube-api-access-sqsxx\") pod \"c4fce1ef-d25b-40aa-add1-32efc614c74c\" (UID: \"c4fce1ef-d25b-40aa-add1-32efc614c74c\") "
	Sep 24 18:32:21 addons-472000 kubelet[2063]: I0924 18:32:21.954582    2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4fce1ef-d25b-40aa-add1-32efc614c74c-kube-api-access-sqsxx" (OuterVolumeSpecName: "kube-api-access-sqsxx") pod "c4fce1ef-d25b-40aa-add1-32efc614c74c" (UID: "c4fce1ef-d25b-40aa-add1-32efc614c74c"). InnerVolumeSpecName "kube-api-access-sqsxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 24 18:32:21 addons-472000 kubelet[2063]: I0924 18:32:21.955085    2063 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56205fad-453c-44bc-b682-3000f315999a-kube-api-access-7vd9l" (OuterVolumeSpecName: "kube-api-access-7vd9l") pod "56205fad-453c-44bc-b682-3000f315999a" (UID: "56205fad-453c-44bc-b682-3000f315999a"). InnerVolumeSpecName "kube-api-access-7vd9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 24 18:32:22 addons-472000 kubelet[2063]: I0924 18:32:22.054021    2063 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7vd9l\" (UniqueName: \"kubernetes.io/projected/56205fad-453c-44bc-b682-3000f315999a-kube-api-access-7vd9l\") on node \"addons-472000\" DevicePath \"\""
	Sep 24 18:32:22 addons-472000 kubelet[2063]: I0924 18:32:22.054039    2063 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sqsxx\" (UniqueName: \"kubernetes.io/projected/c4fce1ef-d25b-40aa-add1-32efc614c74c-kube-api-access-sqsxx\") on node \"addons-472000\" DevicePath \"\""
	Sep 24 18:32:22 addons-472000 kubelet[2063]: I0924 18:32:22.165425    2063 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c7dea8-213c-47ab-8aef-6bb2f332cd98" path="/var/lib/kubelet/pods/a6c7dea8-213c-47ab-8aef-6bb2f332cd98/volumes"
	
	
	==> storage-provisioner [45b4bc969fa5] <==
	I0924 18:19:57.509109       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 18:19:57.541760       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 18:19:57.569869       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 18:19:57.638163       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 18:19:57.638250       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-472000_72702521-c366-4256-a6d8-2951a5a5a7cf!
	I0924 18:19:57.638713       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fd6ebba5-7458-4a2c-9ab1-503ebf8cd875", APIVersion:"v1", ResourceVersion:"840", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-472000_72702521-c366-4256-a6d8-2951a5a5a7cf became leader
	I0924 18:19:57.744721       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-472000_72702521-c366-4256-a6d8-2951a5a5a7cf!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-472000 -n addons-472000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-472000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-jbdh4 ingress-nginx-admission-patch-zp8ts registry-66c9cd494c-dr9lr registry-proxy-jxwjp
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-472000 describe pod busybox ingress-nginx-admission-create-jbdh4 ingress-nginx-admission-patch-zp8ts registry-66c9cd494c-dr9lr registry-proxy-jxwjp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-472000 describe pod busybox ingress-nginx-admission-create-jbdh4 ingress-nginx-admission-patch-zp8ts registry-66c9cd494c-dr9lr registry-proxy-jxwjp: exit status 1 (44.735458ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-472000/192.168.105.2
	Start Time:       Tue, 24 Sep 2024 11:23:10 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bk6wh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bk6wh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m12s                  default-scheduler  Successfully assigned default/busybox to addons-472000
	  Normal   Pulling    7m39s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m39s (x4 over 9m12s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m39s (x4 over 9m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m23s (x6 over 9m11s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m9s (x20 over 9m11s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jbdh4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-zp8ts" not found
	Error from server (NotFound): pods "registry-66c9cd494c-dr9lr" not found
	Error from server (NotFound): pods "registry-proxy-jxwjp" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-472000 describe pod busybox ingress-nginx-admission-create-jbdh4 ingress-nginx-admission-patch-zp8ts registry-66c9cd494c-dr9lr registry-proxy-jxwjp: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.37s)

TestCertOptions (10.13s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-628000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-628000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.862215625s)

-- stdout --
	* [cert-options-628000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-628000" primary control-plane node in "cert-options-628000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-628000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-628000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-628000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-628000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-628000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.811375ms)

-- stdout --
	* The control-plane node cert-options-628000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-628000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-628000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-628000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-628000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-628000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.56275ms)

-- stdout --
	* The control-plane node cert-options-628000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-628000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-628000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-628000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-628000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-24 12:07:44.658964 -0700 PDT m=+2940.594733001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-628000 -n cert-options-628000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-628000 -n cert-options-628000: exit status 7 (31.416791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-628000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-628000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-628000
--- FAIL: TestCertOptions (10.13s)
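
The four SAN assertions above (cert_options_test.go:69) fail vacuously: the VM never started, so there was no apiserver certificate to read. For reference, the check itself reduces to parsing the PEM file and inspecting the certificate's SAN fields; below is a minimal standalone Go sketch, with a placeholder local path (the real test reads /var/lib/minikube/certs/apiserver.crt over SSH, and this is not the test's actual helper):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Placeholder path: assumes a local copy of the apiserver certificate.
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// The run above expected localhost, www.google.com, 127.0.0.1 and
		// 192.168.15.15 to appear among these SAN entries.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
	}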

TestCertExpiration (195.44s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-844000 --memory=2048 --cert-expiration=3m --driver=qemu2 
E0924 12:07:31.150263    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 12:07:32.034246    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-844000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.076289209s)

-- stdout --
	* [cert-expiration-844000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-844000" primary control-plane node in "cert-expiration-844000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-844000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-844000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-844000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-844000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-844000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.216776917s)

-- stdout --
	* [cert-expiration-844000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-844000" primary control-plane node in "cert-expiration-844000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-844000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-844000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-844000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-844000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-844000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-844000" primary control-plane node in "cert-expiration-844000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-844000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-844000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-844000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-24 12:10:44.765687 -0700 PDT m=+3120.702377876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-844000 -n cert-expiration-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-844000 -n cert-expiration-844000: exit status 7 (59.998459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-844000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-844000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-844000
--- FAIL: TestCertExpiration (195.44s)
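
This test, like nearly every failure in this report, dies on the same first error: the qemu2 driver cannot reach the socket_vmnet unix socket at /var/run/socket_vmnet, so no VM ever gets a network. Below is a minimal Go sketch of that connectivity probe, assuming the socket path shown in the logs (a diagnostic sketch only, not minikube's own preflight check; that restarting the host's socket_vmnet service is the fix is an assumption, not verified here):

	package main

	import (
		"fmt"
		"log"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path reported in the failures above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// On this runner the dial would fail with "connection refused",
			// i.e. nothing is listening on the socket.
			log.Fatalf("socket_vmnet unreachable: %v", err)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet reachable at", sock)
	}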

TestDockerFlags (10.32s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-217000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-217000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.085384083s)

-- stdout --
	* [docker-flags-217000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-217000" primary control-plane node in "docker-flags-217000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-217000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:07:24.352273    4294 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:07:24.352420    4294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:07:24.352424    4294 out.go:358] Setting ErrFile to fd 2...
	I0924 12:07:24.352426    4294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:07:24.352575    4294 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:07:24.353540    4294 out.go:352] Setting JSON to false
	I0924 12:07:24.369508    4294 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4015,"bootTime":1727200829,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:07:24.369574    4294 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:07:24.373398    4294 out.go:177] * [docker-flags-217000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:07:24.380360    4294 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:07:24.380401    4294 notify.go:220] Checking for updates...
	I0924 12:07:24.387382    4294 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:07:24.390357    4294 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:07:24.393327    4294 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:07:24.396322    4294 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:07:24.397682    4294 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:07:24.400642    4294 config.go:182] Loaded profile config "force-systemd-flag-171000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:07:24.400709    4294 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:07:24.400761    4294 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:07:24.405287    4294 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:07:24.410353    4294 start.go:297] selected driver: qemu2
	I0924 12:07:24.410360    4294 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:07:24.410367    4294 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:07:24.412557    4294 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:07:24.415308    4294 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:07:24.418428    4294 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0924 12:07:24.418453    4294 cni.go:84] Creating CNI manager for ""
	I0924 12:07:24.418476    4294 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:07:24.418481    4294 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 12:07:24.418507    4294 start.go:340] cluster config:
	{Name:docker-flags-217000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-217000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:07:24.422131    4294 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:07:24.429403    4294 out.go:177] * Starting "docker-flags-217000" primary control-plane node in "docker-flags-217000" cluster
	I0924 12:07:24.433326    4294 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:07:24.433342    4294 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:07:24.433348    4294 cache.go:56] Caching tarball of preloaded images
	I0924 12:07:24.433412    4294 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:07:24.433417    4294 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:07:24.433478    4294 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/docker-flags-217000/config.json ...
	I0924 12:07:24.433496    4294 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/docker-flags-217000/config.json: {Name:mk090f80564ba729d6a8cbcc9bdf1778cc9755b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:07:24.433712    4294 start.go:360] acquireMachinesLock for docker-flags-217000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:07:24.433747    4294 start.go:364] duration metric: took 27.333µs to acquireMachinesLock for "docker-flags-217000"
	I0924 12:07:24.433760    4294 start.go:93] Provisioning new machine with config: &{Name:docker-flags-217000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-217000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:07:24.433786    4294 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:07:24.442320    4294 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0924 12:07:24.460446    4294 start.go:159] libmachine.API.Create for "docker-flags-217000" (driver="qemu2")
	I0924 12:07:24.460474    4294 client.go:168] LocalClient.Create starting
	I0924 12:07:24.460536    4294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:07:24.460566    4294 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:24.460575    4294 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:24.460613    4294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:07:24.460636    4294 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:24.460644    4294 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:24.460987    4294 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:07:24.621719    4294 main.go:141] libmachine: Creating SSH key...
	I0924 12:07:24.715686    4294 main.go:141] libmachine: Creating Disk image...
	I0924 12:07:24.715692    4294 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:07:24.715881    4294 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/disk.qcow2
	I0924 12:07:24.725045    4294 main.go:141] libmachine: STDOUT: 
	I0924 12:07:24.725069    4294 main.go:141] libmachine: STDERR: 
	I0924 12:07:24.725133    4294 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/disk.qcow2 +20000M
	I0924 12:07:24.732956    4294 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:07:24.732974    4294 main.go:141] libmachine: STDERR: 
	I0924 12:07:24.732989    4294 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/disk.qcow2
	I0924 12:07:24.732993    4294 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:07:24.733007    4294 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:07:24.733035    4294 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:d0:37:5b:ee:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/disk.qcow2
	I0924 12:07:24.734631    4294 main.go:141] libmachine: STDOUT: 
	I0924 12:07:24.734645    4294 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:07:24.734664    4294 client.go:171] duration metric: took 274.183875ms to LocalClient.Create
	I0924 12:07:26.736884    4294 start.go:128] duration metric: took 2.303087209s to createHost
	I0924 12:07:26.736938    4294 start.go:83] releasing machines lock for "docker-flags-217000", held for 2.303194084s
	W0924 12:07:26.736992    4294 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:07:26.748293    4294 out.go:177] * Deleting "docker-flags-217000" in qemu2 ...
	W0924 12:07:26.782758    4294 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:07:26.782793    4294 start.go:729] Will try again in 5 seconds ...
	I0924 12:07:31.784944    4294 start.go:360] acquireMachinesLock for docker-flags-217000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:07:31.931457    4294 start.go:364] duration metric: took 146.379ms to acquireMachinesLock for "docker-flags-217000"
	I0924 12:07:31.931554    4294 start.go:93] Provisioning new machine with config: &{Name:docker-flags-217000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-217000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:07:31.931849    4294 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:07:31.947482    4294 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0924 12:07:31.995596    4294 start.go:159] libmachine.API.Create for "docker-flags-217000" (driver="qemu2")
	I0924 12:07:31.995645    4294 client.go:168] LocalClient.Create starting
	I0924 12:07:31.995802    4294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:07:31.995865    4294 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:31.995884    4294 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:31.995950    4294 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:07:31.995994    4294 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:31.996011    4294 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:31.996563    4294 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:07:32.176895    4294 main.go:141] libmachine: Creating SSH key...
	I0924 12:07:32.336118    4294 main.go:141] libmachine: Creating Disk image...
	I0924 12:07:32.336124    4294 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:07:32.336334    4294 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/disk.qcow2
	I0924 12:07:32.345902    4294 main.go:141] libmachine: STDOUT: 
	I0924 12:07:32.345919    4294 main.go:141] libmachine: STDERR: 
	I0924 12:07:32.345981    4294 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/disk.qcow2 +20000M
	I0924 12:07:32.353832    4294 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:07:32.353847    4294 main.go:141] libmachine: STDERR: 
	I0924 12:07:32.353856    4294 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/disk.qcow2
	I0924 12:07:32.353861    4294 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:07:32.353872    4294 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:07:32.353906    4294 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:81:52:6a:45:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/docker-flags-217000/disk.qcow2
	I0924 12:07:32.355492    4294 main.go:141] libmachine: STDOUT: 
	I0924 12:07:32.355504    4294 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:07:32.355518    4294 client.go:171] duration metric: took 359.870292ms to LocalClient.Create
	I0924 12:07:34.356044    4294 start.go:128] duration metric: took 2.424153291s to createHost
	I0924 12:07:34.356139    4294 start.go:83] releasing machines lock for "docker-flags-217000", held for 2.424649125s
	W0924 12:07:34.356602    4294 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-217000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-217000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:07:34.378694    4294 out.go:201] 
	W0924 12:07:34.383692    4294 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:07:34.383726    4294 out.go:270] * 
	* 
	W0924 12:07:34.386371    4294 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:07:34.395448    4294 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-217000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-217000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-217000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (77.679625ms)

-- stdout --
	* The control-plane node docker-flags-217000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-217000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-217000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-217000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-217000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-217000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-217000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-217000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-217000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.758875ms)

-- stdout --
	* The control-plane node docker-flags-217000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-217000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-217000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-217000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-217000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-217000\"\n"
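
For reference, had the host come up, these two probes would be expected to echo the injected settings back, along these lines (hypothetical healthy output; the exact dockerd argv and unit path vary with the boot2docker image):

$ out/minikube-darwin-arm64 -p docker-flags-217000 ssh "sudo systemctl show docker --property=Environment --no-pager"
Environment=FOO=BAR BAZ=BAT
$ out/minikube-darwin-arm64 -p docker-flags-217000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }

The assertions only substring-match "FOO=BAR", "BAZ=BAT", and "--debug", so the failures above are entirely down to the stopped host, not the flag plumbing.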
panic.go:629: *** TestDockerFlags FAILED at 2024-09-24 12:07:34.535594 -0700 PDT m=+2930.471310709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-217000 -n docker-flags-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-217000 -n docker-flags-217000: exit status 7 (29.744833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-217000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-217000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-217000
--- FAIL: TestDockerFlags (10.32s)
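
Every GUEST_PROVISION exit in this group has the same proximate cause: /opt/socket_vmnet/bin/socket_vmnet_client is refused on /var/run/socket_vmnet, so the qemu-system-aarch64 command it wraps never launches. A minimal host-side triage sketch (the Homebrew service name and the gateway address are assumptions from a standard socket_vmnet install, not taken from this log):

$ ls -l /var/run/socket_vmnet
$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
# while the daemon is down, this reproduces: Failed to connect to "/var/run/socket_vmnet": Connection refused
$ sudo brew services restart socket_vmnet
# or run the daemon in the foreground and re-try the client:
$ sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet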

TestForceSystemdFlag (10.16s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-171000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-171000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.959306666s)

-- stdout --
	* [force-systemd-flag-171000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-171000" primary control-plane node in "force-systemd-flag-171000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-171000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:07:19.340369    4271 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:07:19.340496    4271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:07:19.340499    4271 out.go:358] Setting ErrFile to fd 2...
	I0924 12:07:19.340502    4271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:07:19.340649    4271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:07:19.341686    4271 out.go:352] Setting JSON to false
	I0924 12:07:19.358015    4271 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4010,"bootTime":1727200829,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:07:19.358089    4271 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:07:19.365645    4271 out.go:177] * [force-systemd-flag-171000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:07:19.383739    4271 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:07:19.383752    4271 notify.go:220] Checking for updates...
	I0924 12:07:19.395590    4271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:07:19.399603    4271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:07:19.401243    4271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:07:19.404549    4271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:07:19.407619    4271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:07:19.410951    4271 config.go:182] Loaded profile config "force-systemd-env-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:07:19.411027    4271 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:07:19.411076    4271 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:07:19.415637    4271 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:07:19.422608    4271 start.go:297] selected driver: qemu2
	I0924 12:07:19.422614    4271 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:07:19.422620    4271 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:07:19.425123    4271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:07:19.428566    4271 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:07:19.431696    4271 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 12:07:19.431712    4271 cni.go:84] Creating CNI manager for ""
	I0924 12:07:19.431733    4271 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:07:19.431747    4271 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 12:07:19.431783    4271 start.go:340] cluster config:
	{Name:force-systemd-flag-171000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:07:19.435746    4271 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:07:19.442613    4271 out.go:177] * Starting "force-systemd-flag-171000" primary control-plane node in "force-systemd-flag-171000" cluster
	I0924 12:07:19.446600    4271 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:07:19.446622    4271 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:07:19.446635    4271 cache.go:56] Caching tarball of preloaded images
	I0924 12:07:19.446715    4271 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:07:19.446723    4271 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:07:19.446798    4271 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/force-systemd-flag-171000/config.json ...
	I0924 12:07:19.446811    4271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/force-systemd-flag-171000/config.json: {Name:mke8e926aaa8ee07f571d3c3cd29388e8cd17faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:07:19.447064    4271 start.go:360] acquireMachinesLock for force-systemd-flag-171000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:07:19.447107    4271 start.go:364] duration metric: took 32.75µs to acquireMachinesLock for "force-systemd-flag-171000"
	I0924 12:07:19.447122    4271 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:07:19.447158    4271 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:07:19.454579    4271 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0924 12:07:19.474181    4271 start.go:159] libmachine.API.Create for "force-systemd-flag-171000" (driver="qemu2")
	I0924 12:07:19.474210    4271 client.go:168] LocalClient.Create starting
	I0924 12:07:19.474286    4271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:07:19.474322    4271 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:19.474337    4271 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:19.474377    4271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:07:19.474403    4271 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:19.474414    4271 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:19.474846    4271 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:07:19.635756    4271 main.go:141] libmachine: Creating SSH key...
	I0924 12:07:19.683656    4271 main.go:141] libmachine: Creating Disk image...
	I0924 12:07:19.683662    4271 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:07:19.683837    4271 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/disk.qcow2
	I0924 12:07:19.692814    4271 main.go:141] libmachine: STDOUT: 
	I0924 12:07:19.692832    4271 main.go:141] libmachine: STDERR: 
	I0924 12:07:19.692893    4271 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/disk.qcow2 +20000M
	I0924 12:07:19.700560    4271 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:07:19.700571    4271 main.go:141] libmachine: STDERR: 
	I0924 12:07:19.700584    4271 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/disk.qcow2
	I0924 12:07:19.700595    4271 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:07:19.700605    4271 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:07:19.700631    4271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:dd:90:e9:6d:28 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/disk.qcow2
	I0924 12:07:19.702183    4271 main.go:141] libmachine: STDOUT: 
	I0924 12:07:19.702195    4271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:07:19.702217    4271 client.go:171] duration metric: took 228.000542ms to LocalClient.Create
	I0924 12:07:21.704381    4271 start.go:128] duration metric: took 2.257217625s to createHost
	I0924 12:07:21.704440    4271 start.go:83] releasing machines lock for "force-systemd-flag-171000", held for 2.257334292s
	W0924 12:07:21.704544    4271 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:07:21.733586    4271 out.go:177] * Deleting "force-systemd-flag-171000" in qemu2 ...
	W0924 12:07:21.757502    4271 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:07:21.757516    4271 start.go:729] Will try again in 5 seconds ...
	I0924 12:07:26.759703    4271 start.go:360] acquireMachinesLock for force-systemd-flag-171000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:07:26.760006    4271 start.go:364] duration metric: took 243.583µs to acquireMachinesLock for "force-systemd-flag-171000"
	I0924 12:07:26.760075    4271 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:07:26.760373    4271 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:07:26.773215    4271 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0924 12:07:26.816628    4271 start.go:159] libmachine.API.Create for "force-systemd-flag-171000" (driver="qemu2")
	I0924 12:07:26.816667    4271 client.go:168] LocalClient.Create starting
	I0924 12:07:26.816787    4271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:07:26.816863    4271 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:26.816882    4271 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:26.816947    4271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:07:26.817010    4271 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:26.817022    4271 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:26.817751    4271 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:07:27.005671    4271 main.go:141] libmachine: Creating SSH key...
	I0924 12:07:27.193015    4271 main.go:141] libmachine: Creating Disk image...
	I0924 12:07:27.193022    4271 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:07:27.193230    4271 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/disk.qcow2
	I0924 12:07:27.202743    4271 main.go:141] libmachine: STDOUT: 
	I0924 12:07:27.202763    4271 main.go:141] libmachine: STDERR: 
	I0924 12:07:27.202843    4271 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/disk.qcow2 +20000M
	I0924 12:07:27.210808    4271 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:07:27.210822    4271 main.go:141] libmachine: STDERR: 
	I0924 12:07:27.210833    4271 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/disk.qcow2
	I0924 12:07:27.210838    4271 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:07:27.210846    4271 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:07:27.210880    4271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:b4:f1:44:be:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-flag-171000/disk.qcow2
	I0924 12:07:27.212485    4271 main.go:141] libmachine: STDOUT: 
	I0924 12:07:27.212498    4271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:07:27.212511    4271 client.go:171] duration metric: took 395.841167ms to LocalClient.Create
	I0924 12:07:29.214675    4271 start.go:128] duration metric: took 2.45428525s to createHost
	I0924 12:07:29.214728    4271 start.go:83] releasing machines lock for "force-systemd-flag-171000", held for 2.454713625s
	W0924 12:07:29.215191    4271 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-171000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-171000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:07:29.230825    4271 out.go:201] 
	W0924 12:07:29.245158    4271 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:07:29.245189    4271 out.go:270] * 
	* 
	W0924 12:07:29.247814    4271 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:07:29.257817    4271 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-171000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-171000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-171000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.460292ms)

-- stdout --
	* The control-plane node force-systemd-flag-171000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-171000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-171000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-24 12:07:29.354922 -0700 PDT m=+2925.290612667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-171000 -n force-systemd-flag-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-171000 -n force-systemd-flag-171000: exit status 7 (35.649583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-171000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-171000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-171000
--- FAIL: TestForceSystemdFlag (10.16s)
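
Same socket_vmnet failure mode as TestDockerFlags, so the assertion never reaches a running daemon. With a live VM, the probe at docker_test.go:110 would be expected to print a single word (hypothetical healthy output, since --force-systemd switches dockerd to the systemd cgroup driver):

$ out/minikube-darwin-arm64 -p force-systemd-flag-171000 ssh "docker info --format {{.CgroupDriver}}"
systemd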

TestForceSystemdEnv (11.54s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-881000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I0924 12:07:13.982148    1598 install.go:79] stdout: 
W0924 12:07:13.982305    1598 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/001/docker-machine-driver-hyperkit 

I0924 12:07:13.982326    1598 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/001/docker-machine-driver-hyperkit]
I0924 12:07:13.997571    1598 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/001/docker-machine-driver-hyperkit]
I0924 12:07:14.008486    1598 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/001/docker-machine-driver-hyperkit]
I0924 12:07:14.017563    1598 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/001/docker-machine-driver-hyperkit]
I0924 12:07:14.033450    1598 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0924 12:07:14.033562    1598 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I0924 12:07:15.818647    1598 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W0924 12:07:15.818667    1598 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W0924 12:07:15.818724    1598 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0924 12:07:15.818752    1598 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/002/docker-machine-driver-hyperkit
I0924 12:07:16.206546    1598 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x104772d40 0x104772d40 0x104772d40 0x104772d40 0x104772d40 0x104772d40 0x104772d40] Decompressors:map[bz2:0x14000715bb0 gz:0x14000715bb8 tar:0x14000715b60 tar.bz2:0x14000715b70 tar.gz:0x14000715b80 tar.xz:0x14000715b90 tar.zst:0x14000715ba0 tbz2:0x14000715b70 tgz:0x14000715b80 txz:0x14000715b90 tzst:0x14000715ba0 xz:0x14000715bc0 zip:0x14000715bd0 zst:0x14000715bc8] Getters:map[file:0x140017548f0 http:0x1400010fd60 https:0x1400010fdb0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0924 12:07:16.206665    1598 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/002/docker-machine-driver-hyperkit
I0924 12:07:19.268345    1598 install.go:79] stdout: 
W0924 12:07:19.268515    1598 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/002/docker-machine-driver-hyperkit 

I0924 12:07:19.268539    1598 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/002/docker-machine-driver-hyperkit]
I0924 12:07:19.282741    1598 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/002/docker-machine-driver-hyperkit]
I0924 12:07:19.294365    1598 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/002/docker-machine-driver-hyperkit]
I0924 12:07:19.303113    1598 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/002/docker-machine-driver-hyperkit]
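
The interleaved install.go/download.go lines above belong to TestHyperKitDriverInstallOrUpdate running in parallel: the fixture driver reports version 1.2.0 where 1.11.0 is wanted, the arm64-specific v1.3.0 asset 404s, and the downloader falls back to the common URL. That probe can be replayed by hand (a sketch assuming outbound network access; curl -f makes HTTP errors exit non-zero):

$ curl -fsIL https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256
# exits non-zero on the 404, matching "bad response code: 404" above
$ curl -fsIL https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256
# resolves, so download.go retries with the non-arch asset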
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-881000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.34932725s)

-- stdout --
	* [force-systemd-env-881000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-881000" primary control-plane node in "force-systemd-env-881000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-881000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:07:12.812409    4239 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:07:12.812549    4239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:07:12.812552    4239 out.go:358] Setting ErrFile to fd 2...
	I0924 12:07:12.812554    4239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:07:12.812679    4239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:07:12.813717    4239 out.go:352] Setting JSON to false
	I0924 12:07:12.829566    4239 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4003,"bootTime":1727200829,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:07:12.829639    4239 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:07:12.836227    4239 out.go:177] * [force-systemd-env-881000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:07:12.844151    4239 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:07:12.844201    4239 notify.go:220] Checking for updates...
	I0924 12:07:12.850129    4239 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:07:12.853090    4239 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:07:12.856093    4239 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:07:12.859121    4239 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:07:12.862158    4239 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0924 12:07:12.865473    4239 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:07:12.865520    4239 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:07:12.870053    4239 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:07:12.877036    4239 start.go:297] selected driver: qemu2
	I0924 12:07:12.877043    4239 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:07:12.877057    4239 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:07:12.879484    4239 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:07:12.882031    4239 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:07:12.885219    4239 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 12:07:12.885235    4239 cni.go:84] Creating CNI manager for ""
	I0924 12:07:12.885261    4239 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:07:12.885265    4239 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 12:07:12.885305    4239 start.go:340] cluster config:
	{Name:force-systemd-env-881000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:07:12.888980    4239 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:07:12.896073    4239 out.go:177] * Starting "force-systemd-env-881000" primary control-plane node in "force-systemd-env-881000" cluster
	I0924 12:07:12.900106    4239 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:07:12.900125    4239 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:07:12.900136    4239 cache.go:56] Caching tarball of preloaded images
	I0924 12:07:12.900224    4239 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:07:12.900230    4239 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:07:12.900293    4239 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/force-systemd-env-881000/config.json ...
	I0924 12:07:12.900304    4239 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/force-systemd-env-881000/config.json: {Name:mk04f4135aaa67e21d431219e55efe8236b07681 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:07:12.900516    4239 start.go:360] acquireMachinesLock for force-systemd-env-881000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:07:12.900550    4239 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "force-systemd-env-881000"
	I0924 12:07:12.900563    4239 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:07:12.900594    4239 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:07:12.909123    4239 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0924 12:07:12.926443    4239 start.go:159] libmachine.API.Create for "force-systemd-env-881000" (driver="qemu2")
	I0924 12:07:12.926470    4239 client.go:168] LocalClient.Create starting
	I0924 12:07:12.926532    4239 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:07:12.926567    4239 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:12.926579    4239 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:12.926620    4239 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:07:12.926644    4239 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:12.926653    4239 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:12.927039    4239 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:07:13.088539    4239 main.go:141] libmachine: Creating SSH key...
	I0924 12:07:13.247901    4239 main.go:141] libmachine: Creating Disk image...
	I0924 12:07:13.247909    4239 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:07:13.248104    4239 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/disk.qcow2
	I0924 12:07:13.257513    4239 main.go:141] libmachine: STDOUT: 
	I0924 12:07:13.257534    4239 main.go:141] libmachine: STDERR: 
	I0924 12:07:13.257594    4239 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/disk.qcow2 +20000M
	I0924 12:07:13.265784    4239 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:07:13.265806    4239 main.go:141] libmachine: STDERR: 
	I0924 12:07:13.265818    4239 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/disk.qcow2
	I0924 12:07:13.265824    4239 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:07:13.265836    4239 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:07:13.265863    4239 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:d2:15:35:ad:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/disk.qcow2
	I0924 12:07:13.267521    4239 main.go:141] libmachine: STDOUT: 
	I0924 12:07:13.267539    4239 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:07:13.267562    4239 client.go:171] duration metric: took 341.08775ms to LocalClient.Create
	I0924 12:07:15.269624    4239 start.go:128] duration metric: took 2.369033625s to createHost
	I0924 12:07:15.269640    4239 start.go:83] releasing machines lock for "force-systemd-env-881000", held for 2.369098208s
	W0924 12:07:15.269669    4239 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:07:15.283214    4239 out.go:177] * Deleting "force-systemd-env-881000" in qemu2 ...
	W0924 12:07:15.296253    4239 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:07:15.296259    4239 start.go:729] Will try again in 5 seconds ...
	I0924 12:07:20.298462    4239 start.go:360] acquireMachinesLock for force-systemd-env-881000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:07:21.704612    4239 start.go:364] duration metric: took 1.406064292s to acquireMachinesLock for "force-systemd-env-881000"
	I0924 12:07:21.704760    4239 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:07:21.705030    4239 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:07:21.716645    4239 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0924 12:07:21.767252    4239 start.go:159] libmachine.API.Create for "force-systemd-env-881000" (driver="qemu2")
	I0924 12:07:21.767304    4239 client.go:168] LocalClient.Create starting
	I0924 12:07:21.767425    4239 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:07:21.767490    4239 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:21.767509    4239 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:21.767571    4239 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:07:21.767617    4239 main.go:141] libmachine: Decoding PEM data...
	I0924 12:07:21.767633    4239 main.go:141] libmachine: Parsing certificate...
	I0924 12:07:21.768197    4239 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:07:21.969842    4239 main.go:141] libmachine: Creating SSH key...
	I0924 12:07:22.055958    4239 main.go:141] libmachine: Creating Disk image...
	I0924 12:07:22.055964    4239 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:07:22.056153    4239 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/disk.qcow2
	I0924 12:07:22.065487    4239 main.go:141] libmachine: STDOUT: 
	I0924 12:07:22.065518    4239 main.go:141] libmachine: STDERR: 
	I0924 12:07:22.065573    4239 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/disk.qcow2 +20000M
	I0924 12:07:22.073262    4239 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:07:22.073285    4239 main.go:141] libmachine: STDERR: 
	I0924 12:07:22.073305    4239 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/disk.qcow2
	I0924 12:07:22.073310    4239 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:07:22.073318    4239 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:07:22.073354    4239 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:f8:78:4e:0c:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/force-systemd-env-881000/disk.qcow2
	I0924 12:07:22.074935    4239 main.go:141] libmachine: STDOUT: 
	I0924 12:07:22.074948    4239 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:07:22.074960    4239 client.go:171] duration metric: took 307.651333ms to LocalClient.Create
	I0924 12:07:24.077288    4239 start.go:128] duration metric: took 2.372198042s to createHost
	I0924 12:07:24.077378    4239 start.go:83] releasing machines lock for "force-systemd-env-881000", held for 2.372736792s
	W0924 12:07:24.077725    4239 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-881000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-881000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:07:24.083497    4239 out.go:201] 
	W0924 12:07:24.105499    4239 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:07:24.105529    4239 out.go:270] * 
	* 
	W0924 12:07:24.108070    4239 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:07:24.117338    4239 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-881000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-881000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-881000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.026208ms)

-- stdout --
	* The control-plane node force-systemd-env-881000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-881000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-881000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-24 12:07:24.211456 -0700 PDT m=+2920.147120376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-881000 -n force-systemd-env-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-881000 -n force-systemd-env-881000: exit status 7 (33.98825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-881000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-881000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-881000
--- FAIL: TestForceSystemdEnv (11.54s)
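
The failure above is environmental: /opt/socket_vmnet/bin/socket_vmnet_client could not reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 was never launched and both createHost attempts failed identically. Below is a minimal Go sketch of a pre-flight probe that reproduces the symptom outside the harness; the socket path comes from the SocketVMnetPath in the machine config above, while the probe itself is illustrative rather than minikube's own code:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client talks to; a daemon that
	// is stopped or not listening yields the same "Connection refused" as above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet not reachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

When this probe fails on the build agent, every qemu2 test that needs socket_vmnet networking fails at VM creation, which matches the cluster of ~10s Start failures throughout this report.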

TestFunctional/parallel/ServiceCmdConnect (42.79s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-313000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-313000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-vgvs5" [3d760abe-6791-4556-9eb4-a26441452c4d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-vgvs5" [3d760abe-6791-4556-9eb4-a26441452c4d] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.011740708s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:30294
functional_test.go:1661: error fetching http://192.168.105.4:30294: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
I0924 11:38:09.461032    1598 retry.go:31] will retry after 792.783371ms: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30294: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
I0924 11:38:10.257515    1598 retry.go:31] will retry after 849.242395ms: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30294: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
I0924 11:38:11.108116    1598 retry.go:31] will retry after 1.730636085s: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30294: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
I0924 11:38:12.840343    1598 retry.go:31] will retry after 2.777171402s: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30294: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
I0924 11:38:15.620090    1598 retry.go:31] will retry after 7.210115439s: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
2024/09/24 11:38:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1661: error fetching http://192.168.105.4:30294: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
I0924 11:38:22.831374    1598 retry.go:31] will retry after 8.755376248s: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30294: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
I0924 11:38:31.590936    1598 retry.go:31] will retry after 7.365499482s: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30294: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:30294: Get "http://192.168.105.4:30294": dial tcp 192.168.105.4:30294: connect: connection refused
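
The loop above is minikube's retry helper (retry.go:31) giving up after repeated GETs against the NodePort URL, each retried after a growing, jittered delay. A compact Go sketch of the same fetch-with-backoff pattern follows; the URL is the one the test printed, and the backoff constants are assumptions rather than the helper's exact values:

package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// fetchWithRetry issues GETs with exponential backoff plus jitter,
// mirroring the "will retry after ..." lines in the log above.
func fetchWithRetry(url string, attempts int) error {
	delay := 800 * time.Millisecond
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("gave up after %d attempts", attempts)
}

func main() {
	if err := fetchWithRetry("http://192.168.105.4:30294", 7); err != nil {
		fmt.Println(err)
	}
}

Every attempt here fails with "connection refused" because, as the service dump further down shows, the Endpoints list behind the NodePort is empty: the backing pod is crash-looping, so kube-proxy has nothing to forward to.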
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-313000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-vgvs5
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-313000/192.168.105.4
Start Time:       Tue, 24 Sep 2024 11:37:57 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://dde56c695d86b8d36911e57651fa33875772093df8f0c8162f084661c135c73d
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 24 Sep 2024 11:38:21 -0700
      Finished:     Tue, 24 Sep 2024 11:38:21 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hvjzs (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-hvjzs:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  41s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-vgvs5 to functional-313000
  Normal   Pulling    42s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     36s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 5.406s (5.406s including waiting). Image size: 84957542 bytes.
  Normal   Created    18s (x3 over 36s)  kubelet            Created container echoserver-arm
  Normal   Started    18s (x3 over 36s)  kubelet            Started container echoserver-arm
  Normal   Pulled     18s (x2 over 35s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    4s (x5 over 34s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-vgvs5_default(3d760abe-6791-4556-9eb4-a26441452c4d)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-313000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
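
That single log line is the root cause of the test failure: "exec format error" means the kernel refused to execute /usr/sbin/nginx because the binary inside registry.k8s.io/echoserver-arm:1.8 is built for a different CPU architecture than the arm64 node, so the container exits immediately and the ReplicaSet crash-loops. An illustrative stdlib-only Go check one could run against a binary copied out of the image (for example with docker cp); this is a local diagnostic sketch, not part of the test suite:

package main

import (
	"debug/elf"
	"fmt"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: elfarch <path-to-binary>")
		os.Exit(1)
	}
	// Report the ELF machine type: EM_AARCH64 runs on this node,
	// while EM_X86_64 reproduces the "exec format error" above.
	f, err := elf.Open(os.Args[1])
	if err != nil {
		fmt.Println("not a readable ELF binary:", err)
		os.Exit(1)
	}
	defer f.Close()
	fmt.Println("ELF machine:", f.Machine)
}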
functional_test.go:1614: (dbg) Run:  kubectl --context functional-313000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.83.243
IPs:                      10.106.83.243
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30294/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-313000 -n functional-313000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-313000 ssh findmnt        | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:37 PDT |                     |
	|                | -T /mount3                           |                   |         |         |                     |                     |
	| ssh            | functional-313000 ssh findmnt        | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | -T /mount1                           |                   |         |         |                     |                     |
	| ssh            | functional-313000 ssh findmnt        | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | -T /mount2                           |                   |         |         |                     |                     |
	| ssh            | functional-313000 ssh findmnt        | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | -T /mount3                           |                   |         |         |                     |                     |
	| mount          | -p functional-313000                 | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT |                     |
	|                | --kill=true                          |                   |         |         |                     |                     |
	| service        | functional-313000 service            | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | hello-node-connect --url             |                   |         |         |                     |                     |
	| service        | functional-313000 service list       | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	| service        | functional-313000 service list       | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | -o json                              |                   |         |         |                     |                     |
	| service        | functional-313000 service            | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | --namespace=default --https          |                   |         |         |                     |                     |
	|                | --url hello-node                     |                   |         |         |                     |                     |
	| service        | functional-313000                    | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | service hello-node --url             |                   |         |         |                     |                     |
	|                | --format={{.IP}}                     |                   |         |         |                     |                     |
	| service        | functional-313000 service            | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | hello-node --url                     |                   |         |         |                     |                     |
	| start          | -p functional-313000                 | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT |                     |
	|                | --dry-run --memory                   |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                   |         |         |                     |                     |
	|                | --driver=qemu2                       |                   |         |         |                     |                     |
	| start          | -p functional-313000                 | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT |                     |
	|                | --dry-run --memory                   |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                   |         |         |                     |                     |
	|                | --driver=qemu2                       |                   |         |         |                     |                     |
	| start          | -p functional-313000 --dry-run       | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT |                     |
	|                | --alsologtostderr -v=1               |                   |         |         |                     |                     |
	|                | --driver=qemu2                       |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                   | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | -p functional-313000                 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                   |         |         |                     |                     |
	| image          | functional-313000                    | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | image ls --format short              |                   |         |         |                     |                     |
	|                | --alsologtostderr                    |                   |         |         |                     |                     |
	| image          | functional-313000                    | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | image ls --format yaml               |                   |         |         |                     |                     |
	|                | --alsologtostderr                    |                   |         |         |                     |                     |
	| ssh            | functional-313000 ssh pgrep          | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT |                     |
	|                | buildkitd                            |                   |         |         |                     |                     |
	| image          | functional-313000 image build -t     | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | localhost/my-image:functional-313000 |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                   |         |         |                     |                     |
	| image          | functional-313000 image ls           | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	| image          | functional-313000                    | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | image ls --format json               |                   |         |         |                     |                     |
	|                | --alsologtostderr                    |                   |         |         |                     |                     |
	| image          | functional-313000                    | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | image ls --format table              |                   |         |         |                     |                     |
	|                | --alsologtostderr                    |                   |         |         |                     |                     |
	| update-context | functional-313000                    | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | update-context                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                   |         |         |                     |                     |
	| update-context | functional-313000                    | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | update-context                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                   |         |         |                     |                     |
	| update-context | functional-313000                    | functional-313000 | jenkins | v1.34.0 | 24 Sep 24 11:38 PDT | 24 Sep 24 11:38 PDT |
	|                | update-context                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                   |         |         |                     |                     |
	|----------------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 11:38:12
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 11:38:12.227361    2785 out.go:345] Setting OutFile to fd 1 ...
	I0924 11:38:12.227497    2785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:38:12.227500    2785 out.go:358] Setting ErrFile to fd 2...
	I0924 11:38:12.227502    2785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:38:12.227658    2785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 11:38:12.228756    2785 out.go:352] Setting JSON to false
	I0924 11:38:12.245463    2785 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2263,"bootTime":1727200829,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 11:38:12.245545    2785 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 11:38:12.250003    2785 out.go:177] * [functional-313000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 11:38:12.257094    2785 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 11:38:12.257208    2785 notify.go:220] Checking for updates...
	I0924 11:38:12.264026    2785 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 11:38:12.267020    2785 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 11:38:12.269954    2785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 11:38:12.273072    2785 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 11:38:12.276009    2785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 11:38:12.279186    2785 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 11:38:12.279436    2785 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 11:38:12.284024    2785 out.go:177] * Using the qemu2 driver based on existing profile
	I0924 11:38:12.289978    2785 start.go:297] selected driver: qemu2
	I0924 11:38:12.289983    2785 start.go:901] validating driver "qemu2" against &{Name:functional-313000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-313000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 11:38:12.290035    2785 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 11:38:12.292282    2785 cni.go:84] Creating CNI manager for ""
	I0924 11:38:12.292312    2785 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 11:38:12.292347    2785 start.go:340] cluster config:
	{Name:functional-313000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-313000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 11:38:12.303150    2785 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 24 18:38:18 functional-313000 dockerd[5981]: time="2024-09-24T18:38:18.154267269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 24 18:38:18 functional-313000 dockerd[5975]: time="2024-09-24T18:38:18.236696764Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" spanID=e3a2ab4e6a50e6e1 traceID=8ab46c33526f69e2c90a86d8aac3692e
	Sep 24 18:38:19 functional-313000 cri-dockerd[6230]: time="2024-09-24T18:38:19Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 24 18:38:19 functional-313000 dockerd[5981]: time="2024-09-24T18:38:19.830491826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 24 18:38:19 functional-313000 dockerd[5981]: time="2024-09-24T18:38:19.830532578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 24 18:38:19 functional-313000 dockerd[5981]: time="2024-09-24T18:38:19.830542037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 24 18:38:19 functional-313000 dockerd[5981]: time="2024-09-24T18:38:19.830581123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 24 18:38:20 functional-313000 dockerd[5981]: time="2024-09-24T18:38:20.418378459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 24 18:38:20 functional-313000 dockerd[5981]: time="2024-09-24T18:38:20.418439671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 24 18:38:20 functional-313000 dockerd[5981]: time="2024-09-24T18:38:20.418451088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 24 18:38:20 functional-313000 dockerd[5981]: time="2024-09-24T18:38:20.418528925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 24 18:38:20 functional-313000 dockerd[5975]: time="2024-09-24T18:38:20.444963455Z" level=info msg="ignoring event" container=3021004bc8926fcb47611917fc23e1bff885fde80a9e1c89ead5af3050e4a71d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:38:20 functional-313000 dockerd[5981]: time="2024-09-24T18:38:20.445183258Z" level=info msg="shim disconnected" id=3021004bc8926fcb47611917fc23e1bff885fde80a9e1c89ead5af3050e4a71d namespace=moby
	Sep 24 18:38:20 functional-313000 dockerd[5981]: time="2024-09-24T18:38:20.445219134Z" level=warning msg="cleaning up after shim disconnected" id=3021004bc8926fcb47611917fc23e1bff885fde80a9e1c89ead5af3050e4a71d namespace=moby
	Sep 24 18:38:20 functional-313000 dockerd[5981]: time="2024-09-24T18:38:20.445223760Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 24 18:38:20 functional-313000 dockerd[5975]: 2024/09/24 18:38:20 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Sep 24 18:38:20 functional-313000 dockerd[5975]: 2024/09/24 18:38:20 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Sep 24 18:38:21 functional-313000 dockerd[5981]: time="2024-09-24T18:38:21.435512022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 24 18:38:21 functional-313000 dockerd[5981]: time="2024-09-24T18:38:21.435934461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 24 18:38:21 functional-313000 dockerd[5981]: time="2024-09-24T18:38:21.436234059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 24 18:38:21 functional-313000 dockerd[5981]: time="2024-09-24T18:38:21.436391401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 24 18:38:21 functional-313000 dockerd[5975]: time="2024-09-24T18:38:21.467113902Z" level=info msg="ignoring event" container=dde56c695d86b8d36911e57651fa33875772093df8f0c8162f084661c135c73d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:38:21 functional-313000 dockerd[5981]: time="2024-09-24T18:38:21.467188906Z" level=info msg="shim disconnected" id=dde56c695d86b8d36911e57651fa33875772093df8f0c8162f084661c135c73d namespace=moby
	Sep 24 18:38:21 functional-313000 dockerd[5981]: time="2024-09-24T18:38:21.467225158Z" level=warning msg="cleaning up after shim disconnected" id=dde56c695d86b8d36911e57651fa33875772093df8f0c8162f084661c135c73d namespace=moby
	Sep 24 18:38:21 functional-313000 dockerd[5981]: time="2024-09-24T18:38:21.467229241Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	dde56c695d86b       72565bf5bbedf                                                                                          18 seconds ago       Exited              echoserver-arm              2                   f5e7541cb5736       hello-node-connect-65d86f57f4-vgvs5
	3021004bc8926       72565bf5bbedf                                                                                          19 seconds ago       Exited              echoserver-arm              2                   fffb709f20fc6       hello-node-64b4f8f9ff-tnb87
	d1424336e75da       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   20 seconds ago       Running             dashboard-metrics-scraper   0                   0e0fe66d08e81       dashboard-metrics-scraper-c5db448b4-kwvqt
	a853c0da6cce9       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         21 seconds ago       Running             kubernetes-dashboard        0                   1e5694d2ff793       kubernetes-dashboard-695b96c756-6h6ls
	5941decb5a46f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    44 seconds ago       Exited              mount-munger                0                   0d8330c51b9b3       busybox-mount
	e8b4265abe215       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                          48 seconds ago       Running             myfrontend                  0                   7cea9be0933f4       sp-pod
	43612444d1c03       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                          53 seconds ago       Running             nginx                       0                   d6d642eab165e       nginx-svc
	625946d6b3be7       2f6c962e7b831                                                                                          About a minute ago   Running             coredns                     2                   daf1aa997830f       coredns-7c65d6cfc9-slnxh
	b9e8d75aa9da5       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         3                   30f24d2b937e9       storage-provisioner
	d09d9c01e5e06       24a140c548c07                                                                                          About a minute ago   Running             kube-proxy                  2                   4c4c823df297e       kube-proxy-ld4cl
	32fd42f8103cf       27e3830e14027                                                                                          About a minute ago   Running             etcd                        2                   07996f15b5f36       etcd-functional-313000
	de7cc0f686b78       279f381cb3736                                                                                          About a minute ago   Running             kube-controller-manager     2                   389ba0076d130       kube-controller-manager-functional-313000
	6030174de91ca       7f8aa378bb47d                                                                                          About a minute ago   Running             kube-scheduler              2                   33488ede2a3a0       kube-scheduler-functional-313000
	5327286a9af28       d3f53a98c0a9d                                                                                          About a minute ago   Running             kube-apiserver              0                   c2cd03aa2aba6       kube-apiserver-functional-313000
	90ac6a6a96599       ba04bb24b9575                                                                                          2 minutes ago        Exited              storage-provisioner         2                   01a06a8fafaf1       storage-provisioner
	f1ea83241b627       2f6c962e7b831                                                                                          2 minutes ago        Exited              coredns                     1                   b3d526d132afb       coredns-7c65d6cfc9-slnxh
	11bd363edd5ab       24a140c548c07                                                                                          2 minutes ago        Exited              kube-proxy                  1                   f857bec44851f       kube-proxy-ld4cl
	498bad10af5cb       7f8aa378bb47d                                                                                          2 minutes ago        Exited              kube-scheduler              1                   813524291be94       kube-scheduler-functional-313000
	1df362c2b8767       279f381cb3736                                                                                          2 minutes ago        Exited              kube-controller-manager     1                   f649479f94afd       kube-controller-manager-functional-313000
	6b6ec6032daa5       27e3830e14027                                                                                          2 minutes ago        Exited              etcd                        1                   2a57d7862723b       etcd-functional-313000
	
	
	==> coredns [625946d6b3be] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52244 - 39695 "HINFO IN 5624122856818864838.8741546012939312438. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.09271674s
	[INFO] 10.244.0.1:23464 - 23677 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000095047s
	[INFO] 10.244.0.1:30795 - 289 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000164218s
	[INFO] 10.244.0.1:59982 - 58628 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000985095s
	[INFO] 10.244.0.1:49382 - 25255 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000094421s
	[INFO] 10.244.0.1:51006 - 31371 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000065337s
	[INFO] 10.244.0.1:56097 - 25365 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000171843s
	
	
	==> coredns [f1ea83241b62] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55424 - 1802 "HINFO IN 963030209142510188.451304879446872648. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.044597295s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
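
The connection-refused entries above are the CoreDNS kubernetes plugin's client-go reflector failing its initial List calls while the apiserver behind the 10.96.0.1:443 service VIP was restarting; the plugin holds the server in "waiting for Kubernetes API" until a list succeeds. Below is a minimal in-cluster sketch of that same list call, assuming RBAC to read EndpointSlices; the program is illustrative and not part of the test suite or the captured output.

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // same config path the coredns kubernetes plugin uses
		if err != nil {
			log.Fatalf("in-cluster config: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("clientset: %v", err)
		}
		// Mirrors the reflector's initial list seen in the log: limit=500, resourceVersion=0.
		slices, err := cs.DiscoveryV1().EndpointSlices(metav1.NamespaceAll).List(
			context.Background(), metav1.ListOptions{Limit: 500, ResourceVersion: "0"})
		if err != nil {
			log.Fatalf("list endpointslices: %v", err) // "connection refused" while the apiserver is down
		}
		fmt.Printf("apiserver reachable; %d endpointslices\n", len(slices.Items))
	}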
	
	
	==> describe nodes <==
	Name:               functional-313000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-313000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=functional-313000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T11_35_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:35:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-313000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:38:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:38:39 +0000   Tue, 24 Sep 2024 18:35:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:38:39 +0000   Tue, 24 Sep 2024 18:35:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:38:39 +0000   Tue, 24 Sep 2024 18:35:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:38:39 +0000   Tue, 24 Sep 2024 18:35:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-313000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 4dcd1c2ad3fb4d6e8e54670b06f6fb23
	  System UUID:                4dcd1c2ad3fb4d6e8e54670b06f6fb23
	  Boot ID:                    93725ab6-0e9e-4a41-af79-79353c0c78d8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-tnb87                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  default                     hello-node-connect-65d86f57f4-vgvs5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 coredns-7c65d6cfc9-slnxh                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m48s
	  kube-system                 etcd-functional-313000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m53s
	  kube-system                 kube-apiserver-functional-313000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-functional-313000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-proxy-ld4cl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 kube-scheduler-functional-313000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-kwvqt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-6h6ls        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  Starting                 90s                    kube-proxy       
	  Normal  Starting                 2m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m53s                  kubelet          Node functional-313000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m53s                  kubelet          Node functional-313000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s                  kubelet          Node functional-313000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m53s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m50s                  kubelet          Node functional-313000 status is now: NodeReady
	  Normal  RegisteredNode           2m48s                  node-controller  Node functional-313000 event: Registered Node functional-313000 in Controller
	  Normal  NodeHasNoDiskPressure    2m19s (x8 over 2m19s)  kubelet          Node functional-313000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m19s (x8 over 2m19s)  kubelet          Node functional-313000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m19s (x7 over 2m19s)  kubelet          Node functional-313000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m13s                  node-controller  Node functional-313000 event: Registered Node functional-313000 in Controller
	  Normal  Starting                 95s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  95s (x8 over 95s)      kubelet          Node functional-313000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s (x8 over 95s)      kubelet          Node functional-313000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x7 over 95s)      kubelet          Node functional-313000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  95s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           89s                    node-controller  Node functional-313000 event: Registered Node functional-313000 in Controller
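
A quick cross-check of the Allocated resources block: the 750m CPU figure is the sum of the per-pod requests in the table (100m coredns + 100m etcd + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler), taken against the node's 2-CPU (2000m) allocatable capacity, and kubectl rounds 37.5% down to 37%. As an illustrative sketch (not part of the captured output):

	package main

	import "fmt"

	func main() {
		// CPU requests (millicores) copied from the Non-terminated Pods table above:
		// coredns, etcd, kube-apiserver, kube-controller-manager, kube-scheduler.
		requests := []int{100, 100, 250, 200, 100}
		total := 0
		for _, m := range requests {
			total += m
		}
		const allocatable = 2000 // 2 CPUs, from the Allocatable section
		fmt.Printf("%dm / %dm = %.1f%%\n", total, allocatable, float64(total)/allocatable*100)
		// Prints 750m / 2000m = 37.5%, shown by kubectl describe as "750m (37%)".
		// Memory works the same way: 70Mi (coredns) + 100Mi (etcd) = 170Mi requested.
	}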
	
	
	==> dmesg <==
	[  +9.702256] kauditd_printk_skb: 36 callbacks suppressed
	[  +4.467599] systemd-fstab-generator[5053]: Ignoring "noauto" option for root device
	[ +12.192859] systemd-fstab-generator[5497]: Ignoring "noauto" option for root device
	[  +0.053383] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.114767] systemd-fstab-generator[5530]: Ignoring "noauto" option for root device
	[  +0.111870] systemd-fstab-generator[5542]: Ignoring "noauto" option for root device
	[  +0.114302] systemd-fstab-generator[5556]: Ignoring "noauto" option for root device
	[  +5.116906] kauditd_printk_skb: 89 callbacks suppressed
	[Sep24 18:37] systemd-fstab-generator[6183]: Ignoring "noauto" option for root device
	[  +0.095847] systemd-fstab-generator[6195]: Ignoring "noauto" option for root device
	[  +0.084182] systemd-fstab-generator[6207]: Ignoring "noauto" option for root device
	[  +0.107222] systemd-fstab-generator[6222]: Ignoring "noauto" option for root device
	[  +0.223053] systemd-fstab-generator[6391]: Ignoring "noauto" option for root device
	[  +1.123296] systemd-fstab-generator[6515]: Ignoring "noauto" option for root device
	[  +4.409910] kauditd_printk_skb: 199 callbacks suppressed
	[  +5.756578] kauditd_printk_skb: 33 callbacks suppressed
	[  +8.911095] systemd-fstab-generator[7584]: Ignoring "noauto" option for root device
	[  +6.397594] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.557611] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.104704] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.089266] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.128396] kauditd_printk_skb: 27 callbacks suppressed
	[Sep24 18:38] kauditd_printk_skb: 19 callbacks suppressed
	[ +11.483673] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.292129] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [32fd42f8103c] <==
	{"level":"info","ts":"2024-09-24T18:37:05.185099Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-24T18:37:05.185124Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-24T18:37:05.185156Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-24T18:37:05.185267Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-24T18:37:05.185318Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-24T18:37:05.191100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-09-24T18:37:05.191146Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-09-24T18:37:05.191205Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T18:37:05.191246Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T18:37:06.850156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-24T18:37:06.850311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-24T18:37:06.850356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-24T18:37:06.850390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-24T18:37:06.850408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-24T18:37:06.850814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-24T18:37:06.850865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-24T18:37:06.856137Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T18:37:06.856118Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-313000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T18:37:06.856489Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T18:37:06.856845Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T18:37:06.856870Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T18:37:06.858086Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T18:37:06.858086Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T18:37:06.860439Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-24T18:37:06.860567Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [6b6ec6032daa] <==
	{"level":"info","ts":"2024-09-24T18:36:22.101339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-24T18:36:22.101727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-24T18:36:22.101758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-24T18:36:22.101787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-24T18:36:22.101815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-24T18:36:22.104196Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T18:36:22.104590Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T18:36:22.104199Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-313000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T18:36:22.105124Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T18:36:22.105239Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T18:36:22.106593Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T18:36:22.106594Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T18:36:22.109402Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T18:36:22.109472Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-24T18:36:50.231104Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-24T18:36:50.231144Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-313000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-24T18:36:50.231187Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T18:36:50.231229Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/09/24 18:36:50 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-24T18:36:50.237426Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-24T18:36:50.237449Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-24T18:36:50.237474Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-24T18:36:50.238825Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-24T18:36:50.238859Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-24T18:36:50.238863Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-313000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 18:38:39 up 3 min,  0 users,  load average: 0.71, 0.52, 0.22
	Linux functional-313000 5.10.207 #1 SMP PREEMPT Mon Sep 23 18:07:35 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5327286a9af2] <==
	I0924 18:37:07.464028       1 autoregister_controller.go:144] Starting autoregister controller
	I0924 18:37:07.464034       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0924 18:37:07.464054       1 cache.go:39] Caches are synced for autoregister controller
	I0924 18:37:07.490104       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0924 18:37:07.490120       1 policy_source.go:224] refreshing policies
	I0924 18:37:07.493363       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0924 18:37:08.379083       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0924 18:37:08.562190       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0924 18:37:08.562679       1 controller.go:615] quota admission added evaluator for: endpoints
	I0924 18:37:08.564721       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 18:37:08.995833       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0924 18:37:09.000167       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0924 18:37:09.011572       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0924 18:37:09.021777       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 18:37:09.023850       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0924 18:37:27.980308       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.150.148"}
	I0924 18:37:38.295585       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.100.238"}
	E0924 18:37:49.396707       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49731: use of closed network connection
	E0924 18:37:57.156042       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49745: use of closed network connection
	I0924 18:37:57.291273       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0924 18:37:57.332717       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.83.243"}
	I0924 18:38:01.384611       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.82.95"}
	I0924 18:38:12.767507       1 controller.go:615] quota admission added evaluator for: namespaces
	I0924 18:38:12.867962       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.113.236"}
	I0924 18:38:12.878670       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.84.207"}
	
	
	==> kube-controller-manager [1df362c2b876] <==
	I0924 18:36:25.989366       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0924 18:36:25.998724       1 shared_informer.go:320] Caches are synced for ephemeral
	I0924 18:36:25.998772       1 shared_informer.go:320] Caches are synced for stateful set
	I0924 18:36:25.998777       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0924 18:36:25.998923       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0924 18:36:25.999057       1 shared_informer.go:320] Caches are synced for crt configmap
	I0924 18:36:25.999853       1 shared_informer.go:320] Caches are synced for deployment
	I0924 18:36:26.000104       1 shared_informer.go:320] Caches are synced for GC
	I0924 18:36:26.003495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="37.119025ms"
	I0924 18:36:26.003584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="32.004µs"
	I0924 18:36:26.052768       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0924 18:36:26.052856       1 shared_informer.go:320] Caches are synced for endpoint
	I0924 18:36:26.064066       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0924 18:36:26.107616       1 shared_informer.go:320] Caches are synced for resource quota
	I0924 18:36:26.188391       1 shared_informer.go:320] Caches are synced for resource quota
	I0924 18:36:26.199060       1 shared_informer.go:320] Caches are synced for taint
	I0924 18:36:26.199141       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0924 18:36:26.199294       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-313000"
	I0924 18:36:26.199340       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0924 18:36:26.199595       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0924 18:36:26.620497       1 shared_informer.go:320] Caches are synced for garbage collector
	I0924 18:36:26.698836       1 shared_informer.go:320] Caches are synced for garbage collector
	I0924 18:36:26.699093       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0924 18:36:33.073854       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.928799ms"
	I0924 18:36:33.074788       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="48.588µs"
	
	
	==> kube-controller-manager [de7cc0f686b7] <==
	I0924 18:38:12.797439       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="11.825293ms"
	E0924 18:38:12.797549       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0924 18:38:12.801578       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.501296ms"
	E0924 18:38:12.801619       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0924 18:38:12.802404       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.70253ms"
	E0924 18:38:12.802432       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0924 18:38:12.808072       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.271727ms"
	E0924 18:38:12.808411       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0924 18:38:12.808188       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.205848ms"
	E0924 18:38:12.808423       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0924 18:38:12.820357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.603812ms"
	I0924 18:38:12.842960       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="22.49336ms"
	I0924 18:38:12.843141       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="27.377µs"
	I0924 18:38:12.847282       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="30.415614ms"
	I0924 18:38:12.862760       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.360773ms"
	I0924 18:38:12.863575       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="101.713µs"
	I0924 18:38:18.465347       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.162798ms"
	I0924 18:38:18.465595       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="29.876µs"
	I0924 18:38:20.474100       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="31.294µs"
	I0924 18:38:20.488788       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.448948ms"
	I0924 18:38:20.489171       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="12.918µs"
	I0924 18:38:21.493967       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="22.335µs"
	I0924 18:38:22.511066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="29.793µs"
	I0924 18:38:34.410624       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="52.253µs"
	I0924 18:38:39.142571       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-313000"
	
	
	==> kube-proxy [11bd363edd5a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 18:36:23.616826       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 18:36:23.620409       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0924 18:36:23.620434       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 18:36:23.628657       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 18:36:23.628673       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 18:36:23.628686       1 server_linux.go:169] "Using iptables Proxier"
	I0924 18:36:23.629325       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 18:36:23.629401       1 server.go:483] "Version info" version="v1.31.1"
	I0924 18:36:23.629405       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:36:23.629989       1 config.go:199] "Starting service config controller"
	I0924 18:36:23.629994       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 18:36:23.630001       1 config.go:105] "Starting endpoint slice config controller"
	I0924 18:36:23.630003       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 18:36:23.630137       1 config.go:328] "Starting node config controller"
	I0924 18:36:23.630140       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 18:36:23.732005       1 shared_informer.go:320] Caches are synced for service config
	I0924 18:36:23.732005       1 shared_informer.go:320] Caches are synced for node config
	I0924 18:36:23.732030       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d09d9c01e5e0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0924 18:37:08.873382       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0924 18:37:08.876612       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0924 18:37:08.876637       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 18:37:08.940270       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0924 18:37:08.940290       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0924 18:37:08.940304       1 server_linux.go:169] "Using iptables Proxier"
	I0924 18:37:08.941007       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 18:37:08.941093       1 server.go:483] "Version info" version="v1.31.1"
	I0924 18:37:08.941097       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:37:08.941738       1 config.go:199] "Starting service config controller"
	I0924 18:37:08.941743       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 18:37:08.941752       1 config.go:105] "Starting endpoint slice config controller"
	I0924 18:37:08.941754       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 18:37:08.941880       1 config.go:328] "Starting node config controller"
	I0924 18:37:08.941883       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 18:37:09.041870       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 18:37:09.041897       1 shared_informer.go:320] Caches are synced for node config
	I0924 18:37:09.041870       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [498bad10af5c] <==
	I0924 18:36:21.175989       1 serving.go:386] Generated self-signed cert in-memory
	W0924 18:36:22.643349       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 18:36:22.643509       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 18:36:22.643536       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 18:36:22.643557       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 18:36:22.668253       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0924 18:36:22.668267       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:36:22.669742       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0924 18:36:22.680947       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0924 18:36:22.688535       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 18:36:22.681563       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0924 18:36:22.793622       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 18:36:50.238529       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0924 18:36:50.238718       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0924 18:36:50.238771       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [6030174de91c] <==
	I0924 18:37:05.405205       1 serving.go:386] Generated self-signed cert in-memory
	W0924 18:37:07.385570       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 18:37:07.385689       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 18:37:07.385719       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 18:37:07.385735       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 18:37:07.404393       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0924 18:37:07.404492       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:37:07.405557       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0924 18:37:07.405638       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0924 18:37:07.405679       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 18:37:07.405698       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0924 18:37:07.505966       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 18:38:05 functional-313000 kubelet[6522]: E0924 18:38:05.338468    6522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-vgvs5_default(3d760abe-6791-4556-9eb4-a26441452c4d)\"" pod="default/hello-node-connect-65d86f57f4-vgvs5" podUID="3d760abe-6791-4556-9eb4-a26441452c4d"
	Sep 24 18:38:06 functional-313000 kubelet[6522]: I0924 18:38:06.356184    6522 scope.go:117] "RemoveContainer" containerID="a9d07e37e1fa5d26be35acfdd217e5d1a5034d87558d97ab9b97c3cd49a8fafa"
	Sep 24 18:38:06 functional-313000 kubelet[6522]: I0924 18:38:06.358148    6522 scope.go:117] "RemoveContainer" containerID="4edba25df64a26ec4d1256b915851563f6747a2da21f5d96b6f65eca67295592"
	Sep 24 18:38:06 functional-313000 kubelet[6522]: E0924 18:38:06.358667    6522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-vgvs5_default(3d760abe-6791-4556-9eb4-a26441452c4d)\"" pod="default/hello-node-connect-65d86f57f4-vgvs5" podUID="3d760abe-6791-4556-9eb4-a26441452c4d"
	Sep 24 18:38:06 functional-313000 kubelet[6522]: E0924 18:38:06.361787    6522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-tnb87_default(ce3378d0-66e5-4ae6-94cf-7af0fae9dd9e)\"" pod="default/hello-node-64b4f8f9ff-tnb87" podUID="ce3378d0-66e5-4ae6-94cf-7af0fae9dd9e"
	Sep 24 18:38:12 functional-313000 kubelet[6522]: I0924 18:38:12.852286    6522 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7f01aa78-ae6d-47c4-b853-7f3397e49c15-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-kwvqt\" (UID: \"7f01aa78-ae6d-47c4-b853-7f3397e49c15\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-kwvqt"
	Sep 24 18:38:12 functional-313000 kubelet[6522]: I0924 18:38:12.852314    6522 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c25m7\" (UniqueName: \"kubernetes.io/projected/56c3a667-0f0c-487c-bf06-8e5ec611ce79-kube-api-access-c25m7\") pod \"kubernetes-dashboard-695b96c756-6h6ls\" (UID: \"56c3a667-0f0c-487c-bf06-8e5ec611ce79\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-6h6ls"
	Sep 24 18:38:12 functional-313000 kubelet[6522]: I0924 18:38:12.852323    6522 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/56c3a667-0f0c-487c-bf06-8e5ec611ce79-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-6h6ls\" (UID: \"56c3a667-0f0c-487c-bf06-8e5ec611ce79\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-6h6ls"
	Sep 24 18:38:12 functional-313000 kubelet[6522]: I0924 18:38:12.852365    6522 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgm49\" (UniqueName: \"kubernetes.io/projected/7f01aa78-ae6d-47c4-b853-7f3397e49c15-kube-api-access-wgm49\") pod \"dashboard-metrics-scraper-c5db448b4-kwvqt\" (UID: \"7f01aa78-ae6d-47c4-b853-7f3397e49c15\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-kwvqt"
	Sep 24 18:38:20 functional-313000 kubelet[6522]: I0924 18:38:20.373142    6522 scope.go:117] "RemoveContainer" containerID="a9d07e37e1fa5d26be35acfdd217e5d1a5034d87558d97ab9b97c3cd49a8fafa"
	Sep 24 18:38:20 functional-313000 kubelet[6522]: I0924 18:38:20.467328    6522 scope.go:117] "RemoveContainer" containerID="a9d07e37e1fa5d26be35acfdd217e5d1a5034d87558d97ab9b97c3cd49a8fafa"
	Sep 24 18:38:20 functional-313000 kubelet[6522]: I0924 18:38:20.467479    6522 scope.go:117] "RemoveContainer" containerID="3021004bc8926fcb47611917fc23e1bff885fde80a9e1c89ead5af3050e4a71d"
	Sep 24 18:38:20 functional-313000 kubelet[6522]: E0924 18:38:20.467550    6522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-tnb87_default(ce3378d0-66e5-4ae6-94cf-7af0fae9dd9e)\"" pod="default/hello-node-64b4f8f9ff-tnb87" podUID="ce3378d0-66e5-4ae6-94cf-7af0fae9dd9e"
	Sep 24 18:38:20 functional-313000 kubelet[6522]: I0924 18:38:20.474459    6522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-6h6ls" podStartSLOduration=3.719461056 podStartE2EDuration="8.474447974s" podCreationTimestamp="2024-09-24 18:38:12 +0000 UTC" firstStartedPulling="2024-09-24 18:38:13.260576748 +0000 UTC m=+68.946810079" lastFinishedPulling="2024-09-24 18:38:18.015563667 +0000 UTC m=+73.701796997" observedRunningTime="2024-09-24 18:38:18.461700144 +0000 UTC m=+74.147933433" watchObservedRunningTime="2024-09-24 18:38:20.474447974 +0000 UTC m=+76.160681263"
	Sep 24 18:38:20 functional-313000 kubelet[6522]: I0924 18:38:20.483556    6522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-kwvqt" podStartSLOduration=1.969860031 podStartE2EDuration="8.483542735s" podCreationTimestamp="2024-09-24 18:38:12 +0000 UTC" firstStartedPulling="2024-09-24 18:38:13.272440253 +0000 UTC m=+68.958673542" lastFinishedPulling="2024-09-24 18:38:19.786122957 +0000 UTC m=+75.472356246" observedRunningTime="2024-09-24 18:38:20.483452647 +0000 UTC m=+76.169685936" watchObservedRunningTime="2024-09-24 18:38:20.483542735 +0000 UTC m=+76.169776024"
	Sep 24 18:38:21 functional-313000 kubelet[6522]: I0924 18:38:21.373819    6522 scope.go:117] "RemoveContainer" containerID="4edba25df64a26ec4d1256b915851563f6747a2da21f5d96b6f65eca67295592"
	Sep 24 18:38:21 functional-313000 kubelet[6522]: I0924 18:38:21.489324    6522 scope.go:117] "RemoveContainer" containerID="dde56c695d86b8d36911e57651fa33875772093df8f0c8162f084661c135c73d"
	Sep 24 18:38:21 functional-313000 kubelet[6522]: E0924 18:38:21.489378    6522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-vgvs5_default(3d760abe-6791-4556-9eb4-a26441452c4d)\"" pod="default/hello-node-connect-65d86f57f4-vgvs5" podUID="3d760abe-6791-4556-9eb4-a26441452c4d"
	Sep 24 18:38:22 functional-313000 kubelet[6522]: I0924 18:38:22.503341    6522 scope.go:117] "RemoveContainer" containerID="4edba25df64a26ec4d1256b915851563f6747a2da21f5d96b6f65eca67295592"
	Sep 24 18:38:22 functional-313000 kubelet[6522]: I0924 18:38:22.503491    6522 scope.go:117] "RemoveContainer" containerID="dde56c695d86b8d36911e57651fa33875772093df8f0c8162f084661c135c73d"
	Sep 24 18:38:22 functional-313000 kubelet[6522]: E0924 18:38:22.503558    6522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-vgvs5_default(3d760abe-6791-4556-9eb4-a26441452c4d)\"" pod="default/hello-node-connect-65d86f57f4-vgvs5" podUID="3d760abe-6791-4556-9eb4-a26441452c4d"
	Sep 24 18:38:34 functional-313000 kubelet[6522]: I0924 18:38:34.374381    6522 scope.go:117] "RemoveContainer" containerID="3021004bc8926fcb47611917fc23e1bff885fde80a9e1c89ead5af3050e4a71d"
	Sep 24 18:38:34 functional-313000 kubelet[6522]: E0924 18:38:34.374779    6522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-tnb87_default(ce3378d0-66e5-4ae6-94cf-7af0fae9dd9e)\"" pod="default/hello-node-64b4f8f9ff-tnb87" podUID="ce3378d0-66e5-4ae6-94cf-7af0fae9dd9e"
	Sep 24 18:38:35 functional-313000 kubelet[6522]: I0924 18:38:35.374684    6522 scope.go:117] "RemoveContainer" containerID="dde56c695d86b8d36911e57651fa33875772093df8f0c8162f084661c135c73d"
	Sep 24 18:38:35 functional-313000 kubelet[6522]: E0924 18:38:35.375169    6522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-vgvs5_default(3d760abe-6791-4556-9eb4-a26441452c4d)\"" pod="default/hello-node-connect-65d86f57f4-vgvs5" podUID="3d760abe-6791-4556-9eb4-a26441452c4d"
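
The kubelet entries show the two echoserver-arm containers cycling through CrashLoopBackOff with a growing back-off (10s, then 20s). Pods in that state still report phase Running while the container waits to restart, which is why the harness's non-running-pod query below surfaces only the completed busybox-mount pod. An illustrative client-go equivalent of that query (it uses whatever kubeconfig context is current, whereas the harness pins --context functional-313000):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatalf("kubeconfig: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("clientset: %v", err)
		}
		// Same filter the harness passes to kubectl: --field-selector=status.phase!=Running.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatalf("list pods: %v", err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}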
	
	
	==> kubernetes-dashboard [a853c0da6cce] <==
	2024/09/24 18:38:18 Starting overwatch
	2024/09/24 18:38:18 Using namespace: kubernetes-dashboard
	2024/09/24 18:38:18 Using in-cluster config to connect to apiserver
	2024/09/24 18:38:18 Using secret token for csrf signing
	2024/09/24 18:38:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/24 18:38:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/24 18:38:18 Successful initial request to the apiserver, version: v1.31.1
	2024/09/24 18:38:18 Generating JWE encryption key
	2024/09/24 18:38:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/24 18:38:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/24 18:38:18 Initializing JWE encryption key from synchronized object
	2024/09/24 18:38:18 Creating in-cluster Sidecar client
	2024/09/24 18:38:18 Serving insecurely on HTTP port: 9090
	2024/09/24 18:38:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [90ac6a6a9659] <==
	I0924 18:36:36.180809       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 18:36:36.184428       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 18:36:36.184446       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [b9e8d75aa9da] <==
	I0924 18:37:08.943445       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 18:37:08.950631       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 18:37:08.950700       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 18:37:26.353221       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 18:37:26.353480       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a4455f6-9bbb-4cd3-8585-bbcc134218dc", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-313000_c0be8ee5-e495-4ed5-a3d3-8374e19ca6cc became leader
	I0924 18:37:26.353515       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-313000_c0be8ee5-e495-4ed5-a3d3-8374e19ca6cc!
	I0924 18:37:26.453979       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-313000_c0be8ee5-e495-4ed5-a3d3-8374e19ca6cc!
	I0924 18:37:38.178317       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0924 18:37:38.178426       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    e3fd0987-5d2a-4cef-bc25-6ffce79a13af 339 0 2024-09-24 18:35:52 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-24 18:35:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-ea94145a-6d56-41ce-bbb2-22b307ee03ff &PersistentVolumeClaim{ObjectMeta:{myclaim  default  ea94145a-6d56-41ce-bbb2-22b307ee03ff 659 0 2024-09-24 18:37:38 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-24 18:37:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-24 18:37:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0924 18:37:38.178811       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-ea94145a-6d56-41ce-bbb2-22b307ee03ff" provisioned
	I0924 18:37:38.178829       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0924 18:37:38.178845       1 volume_store.go:212] Trying to save persistentvolume "pvc-ea94145a-6d56-41ce-bbb2-22b307ee03ff"
	I0924 18:37:38.179660       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"ea94145a-6d56-41ce-bbb2-22b307ee03ff", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0924 18:37:38.183862       1 volume_store.go:219] persistentvolume "pvc-ea94145a-6d56-41ce-bbb2-22b307ee03ff" saved
	I0924 18:37:38.184172       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"ea94145a-6d56-41ce-bbb2-22b307ee03ff", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-ea94145a-6d56-41ce-bbb2-22b307ee03ff
	

-- /stdout --
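The "{{524288000 0} {<nil>} 500Mi BinarySI}" fragment in the PVC dump above is Go's rendering of a Kubernetes resource.Quantity. As a minimal illustrative sketch (not part of the test suite; it assumes the k8s.io/apimachinery module is available), the 500Mi request parses to exactly the byte count shown:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // "500Mi" is the storage request from the PVC above. BinarySI means
        // power-of-two suffixes, so 500Mi = 500 * 1024 * 1024 bytes.
        q := resource.MustParse("500Mi")
        fmt.Println(q.Value(), q.Format) // 524288000 BinarySI
    }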
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-313000 -n functional-313000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-313000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-313000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-313000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-313000/192.168.105.4
	Start Time:       Tue, 24 Sep 2024 11:37:53 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  docker://5941decb5a46f1f13ca2266c7c8c957514d0489a6a0ea0e0588ec9fd290fbebc
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 24 Sep 2024 11:37:55 -0700
	      Finished:     Tue, 24 Sep 2024 11:37:55 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hck7n (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-hck7n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  46s   default-scheduler  Successfully assigned default/busybox-mount to functional-313000
	  Normal  Pulling    46s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     45s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.483s (1.483s including waiting). Image size: 3547125 bytes.
	  Normal  Created    45s   kubelet            Created container mount-munger
	  Normal  Started    45s   kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (42.79s)
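For reference, the post-mortem's non-running-pod query at helpers_test.go:261 above is a plain field selector. A minimal client-go sketch of the same filter (illustrative only; the kubeconfig path is a hypothetical stand-in for the harness's functional-313000 context):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical path; substitute the kubeconfig the test cluster uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Same filter as the post-mortem: pods in any namespace whose phase is
        // not Running. A completed pod such as busybox-mount (phase Succeeded)
        // matches even though it exited cleanly, which is why it is listed.
        pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
            metav1.ListOptions{FieldSelector: "status.phase!=Running"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
        }
    }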

TestMultiControlPlane/serial/StopSecondaryNode (162.25s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 node stop m02 -v=7 --alsologtostderr
E0924 11:42:52.559181    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-978000 node stop m02 -v=7 --alsologtostderr: (12.164821s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
E0924 11:42:58.890799    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:43:13.042335    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:43:54.005421    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: (1m15.04416875s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
E0924 11:45:15.928406    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 3 (1m15.038924458s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0924 11:45:28.182226    3080 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0924 11:45:28.182263    3080 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (162.25s)
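The two status.go errors above are SSH dials to the stopped guest that only fail once the OS-level TCP connect timeout expires (about 75 seconds per probe, which accounts for most of this test's 162 seconds). A small sketch, reusing the node address from the log, that surfaces the same condition with an explicit short timeout:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Node IP and SSH port taken from the status errors above; the
        // 3-second cap replaces the long default connect timeout.
        conn, err := net.DialTimeout("tcp", "192.168.105.5:22", 3*time.Second)
        if err != nil {
            fmt.Println("ssh port unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("ssh port is accepting connections")
    }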

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (150.14s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m15.061935417s)
ha_test.go:413: expected profile "ha-978000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-978000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-978000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-978000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
E0924 11:47:31.160147    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:47:32.047577    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 3 (1m15.072308959s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0924 11:47:58.315098    3097 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0924 11:47:58.315147    3097 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (150.14s)
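The assertion at ha_test.go:413 reads the Status field out of the profile-list JSON quoted above. A minimal sketch of that check, decoding only the fields the assertion touches (the binary name is shortened to minikube here for illustration):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Only the fields the assertion reads, per the JSON quoted above.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            // The test expects "Degraded" once one control-plane node is down;
            // this run reported "Unknown" because the status probes timed out.
            fmt.Printf("%s: %s\n", p.Name, p.Status)
        }
    }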

TestMultiControlPlane/serial/RestartSecondaryNode (185.28s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 node start m02 -v=7 --alsologtostderr
E0924 11:47:59.771405    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.12124375s)

-- stdout --
	* Starting "ha-978000-m02" control-plane node in "ha-978000" cluster
	* Restarting existing qemu2 VM for "ha-978000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-978000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 11:47:58.394586    3103 out.go:345] Setting OutFile to fd 1 ...
	I0924 11:47:58.394909    3103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:47:58.394914    3103 out.go:358] Setting ErrFile to fd 2...
	I0924 11:47:58.394917    3103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:47:58.395080    3103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 11:47:58.395387    3103 mustload.go:65] Loading cluster: ha-978000
	I0924 11:47:58.395704    3103 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0924 11:47:58.396007    3103 host.go:58] "ha-978000-m02" host status: Stopped
	I0924 11:47:58.400880    3103 out.go:177] * Starting "ha-978000-m02" control-plane node in "ha-978000" cluster
	I0924 11:47:58.402041    3103 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 11:47:58.402064    3103 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 11:47:58.402076    3103 cache.go:56] Caching tarball of preloaded images
	I0924 11:47:58.402198    3103 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 11:47:58.402209    3103 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 11:47:58.402283    3103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/ha-978000/config.json ...
	I0924 11:47:58.402671    3103 start.go:360] acquireMachinesLock for ha-978000-m02: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 11:47:58.402748    3103 start.go:364] duration metric: took 35.875µs to acquireMachinesLock for "ha-978000-m02"
	I0924 11:47:58.402758    3103 start.go:96] Skipping create...Using existing machine configuration
	I0924 11:47:58.402765    3103 fix.go:54] fixHost starting: m02
	I0924 11:47:58.402889    3103 fix.go:112] recreateIfNeeded on ha-978000-m02: state=Stopped err=<nil>
	W0924 11:47:58.402896    3103 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 11:47:58.406764    3103 out.go:177] * Restarting existing qemu2 VM for "ha-978000-m02" ...
	I0924 11:47:58.410776    3103 qemu.go:418] Using hvf for hardware acceleration
	I0924 11:47:58.410835    3103 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:45:ca:61:db:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/disk.qcow2
	I0924 11:47:58.413865    3103 main.go:141] libmachine: STDOUT: 
	I0924 11:47:58.413902    3103 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 11:47:58.413943    3103 fix.go:56] duration metric: took 11.175625ms for fixHost
	I0924 11:47:58.413949    3103 start.go:83] releasing machines lock for "ha-978000-m02", held for 11.195625ms
	W0924 11:47:58.413961    3103 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 11:47:58.414011    3103 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 11:47:58.414017    3103 start.go:729] Will try again in 5 seconds ...
	I0924 11:48:03.415929    3103 start.go:360] acquireMachinesLock for ha-978000-m02: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 11:48:03.416059    3103 start.go:364] duration metric: took 95.583µs to acquireMachinesLock for "ha-978000-m02"
	I0924 11:48:03.416096    3103 start.go:96] Skipping create...Using existing machine configuration
	I0924 11:48:03.416100    3103 fix.go:54] fixHost starting: m02
	I0924 11:48:03.416268    3103 fix.go:112] recreateIfNeeded on ha-978000-m02: state=Stopped err=<nil>
	W0924 11:48:03.416273    3103 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 11:48:03.418753    3103 out.go:177] * Restarting existing qemu2 VM for "ha-978000-m02" ...
	I0924 11:48:03.421775    3103 qemu.go:418] Using hvf for hardware acceleration
	I0924 11:48:03.421815    3103 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:45:ca:61:db:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/disk.qcow2
	I0924 11:48:03.424045    3103 main.go:141] libmachine: STDOUT: 
	I0924 11:48:03.424070    3103 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 11:48:03.424094    3103 fix.go:56] duration metric: took 7.99425ms for fixHost
	I0924 11:48:03.424098    3103 start.go:83] releasing machines lock for "ha-978000-m02", held for 8.028875ms
	W0924 11:48:03.424143    3103 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-978000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-978000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 11:48:03.427865    3103 out.go:201] 
	W0924 11:48:03.431767    3103 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 11:48:03.431776    3103 out.go:270] * 
	* 
	W0924 11:48:03.433489    3103 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 11:48:03.437808    3103 out.go:201] 

** /stderr **
ha_test.go:422: I0924 11:47:58.394586    3103 out.go:345] Setting OutFile to fd 1 ...
I0924 11:47:58.394909    3103 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 11:47:58.394914    3103 out.go:358] Setting ErrFile to fd 2...
I0924 11:47:58.394917    3103 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 11:47:58.395080    3103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
I0924 11:47:58.395387    3103 mustload.go:65] Loading cluster: ha-978000
I0924 11:47:58.395704    3103 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0924 11:47:58.396007    3103 host.go:58] "ha-978000-m02" host status: Stopped
I0924 11:47:58.400880    3103 out.go:177] * Starting "ha-978000-m02" control-plane node in "ha-978000" cluster
I0924 11:47:58.402041    3103 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0924 11:47:58.402064    3103 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0924 11:47:58.402076    3103 cache.go:56] Caching tarball of preloaded images
I0924 11:47:58.402198    3103 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0924 11:47:58.402209    3103 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0924 11:47:58.402283    3103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/ha-978000/config.json ...
I0924 11:47:58.402671    3103 start.go:360] acquireMachinesLock for ha-978000-m02: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0924 11:47:58.402748    3103 start.go:364] duration metric: took 35.875µs to acquireMachinesLock for "ha-978000-m02"
I0924 11:47:58.402758    3103 start.go:96] Skipping create...Using existing machine configuration
I0924 11:47:58.402765    3103 fix.go:54] fixHost starting: m02
I0924 11:47:58.402889    3103 fix.go:112] recreateIfNeeded on ha-978000-m02: state=Stopped err=<nil>
W0924 11:47:58.402896    3103 fix.go:138] unexpected machine state, will restart: <nil>
I0924 11:47:58.406764    3103 out.go:177] * Restarting existing qemu2 VM for "ha-978000-m02" ...
I0924 11:47:58.410776    3103 qemu.go:418] Using hvf for hardware acceleration
I0924 11:47:58.410835    3103 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:45:ca:61:db:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/disk.qcow2
I0924 11:47:58.413865    3103 main.go:141] libmachine: STDOUT: 
I0924 11:47:58.413902    3103 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0924 11:47:58.413943    3103 fix.go:56] duration metric: took 11.175625ms for fixHost
I0924 11:47:58.413949    3103 start.go:83] releasing machines lock for "ha-978000-m02", held for 11.195625ms
W0924 11:47:58.413961    3103 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0924 11:47:58.414011    3103 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0924 11:47:58.414017    3103 start.go:729] Will try again in 5 seconds ...
I0924 11:48:03.415929    3103 start.go:360] acquireMachinesLock for ha-978000-m02: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0924 11:48:03.416059    3103 start.go:364] duration metric: took 95.583µs to acquireMachinesLock for "ha-978000-m02"
I0924 11:48:03.416096    3103 start.go:96] Skipping create...Using existing machine configuration
I0924 11:48:03.416100    3103 fix.go:54] fixHost starting: m02
I0924 11:48:03.416268    3103 fix.go:112] recreateIfNeeded on ha-978000-m02: state=Stopped err=<nil>
W0924 11:48:03.416273    3103 fix.go:138] unexpected machine state, will restart: <nil>
I0924 11:48:03.418753    3103 out.go:177] * Restarting existing qemu2 VM for "ha-978000-m02" ...
I0924 11:48:03.421775    3103 qemu.go:418] Using hvf for hardware acceleration
I0924 11:48:03.421815    3103 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:45:ca:61:db:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000-m02/disk.qcow2
I0924 11:48:03.424045    3103 main.go:141] libmachine: STDOUT: 
I0924 11:48:03.424070    3103 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0924 11:48:03.424094    3103 fix.go:56] duration metric: took 7.99425ms for fixHost
I0924 11:48:03.424098    3103 start.go:83] releasing machines lock for "ha-978000-m02", held for 8.028875ms
W0924 11:48:03.424143    3103 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-978000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-978000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0924 11:48:03.427865    3103 out.go:201] 
W0924 11:48:03.431767    3103 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0924 11:48:03.431776    3103 out.go:270] * 
* 
W0924 11:48:03.433489    3103 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0924 11:48:03.437808    3103 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-978000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: (1m15.043799042s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
ha_test.go:448: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (30.069591958s)

** stderr ** 
	Unable to connect to the server: dial tcp 192.168.105.254:8443: i/o timeout

** /stderr **
ha_test.go:450: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 3 (1m15.042269791s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0924 11:51:03.593821    3119 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0924 11:51:03.593837    3119 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (185.28s)
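The root cause in this test is the qemu2 driver's network helper: every VM start is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and "Connection refused" means nothing was accepting on /var/run/socket_vmnet when the VM restarted. A minimal preflight sketch for that socket (note the daemon's socket is typically root-owned, so permissions can also make this dial fail):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same unix socket the qemu2 driver needs; "connection refused" here
        // reproduces the failure in the log without starting any VM.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }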

TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.54s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-978000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-978000 -v=7 --alsologtostderr
E0924 11:52:31.160323    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:52:32.046434    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:53:54.248697    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:57:31.153031    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:57:32.039249    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-978000 -v=7 --alsologtostderr: (5m27.1714585s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-978000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-978000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.232514209s)

-- stdout --
	* [ha-978000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-978000" primary control-plane node in "ha-978000" cluster
	* Restarting existing qemu2 VM for "ha-978000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-978000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 11:57:45.919751    3172 out.go:345] Setting OutFile to fd 1 ...
	I0924 11:57:45.919958    3172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:57:45.919962    3172 out.go:358] Setting ErrFile to fd 2...
	I0924 11:57:45.919965    3172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:57:45.920127    3172 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 11:57:45.921441    3172 out.go:352] Setting JSON to false
	I0924 11:57:45.942960    3172 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3436,"bootTime":1727200829,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 11:57:45.943035    3172 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 11:57:45.947457    3172 out.go:177] * [ha-978000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 11:57:45.954413    3172 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 11:57:45.954452    3172 notify.go:220] Checking for updates...
	I0924 11:57:45.962364    3172 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 11:57:45.965341    3172 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 11:57:45.968365    3172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 11:57:45.975433    3172 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 11:57:45.978311    3172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 11:57:45.981639    3172 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 11:57:45.981689    3172 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 11:57:45.986426    3172 out.go:177] * Using the qemu2 driver based on existing profile
	I0924 11:57:45.993333    3172 start.go:297] selected driver: qemu2
	I0924 11:57:45.993340    3172 start.go:901] validating driver "qemu2" against &{Name:ha-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-978000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 11:57:45.993411    3172 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 11:57:45.996478    3172 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 11:57:45.996507    3172 cni.go:84] Creating CNI manager for ""
	I0924 11:57:45.996539    3172 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0924 11:57:45.996602    3172 start.go:340] cluster config:
	{Name:ha-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-978000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 11:57:46.001292    3172 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 11:57:46.008404    3172 out.go:177] * Starting "ha-978000" primary control-plane node in "ha-978000" cluster
	I0924 11:57:46.012385    3172 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 11:57:46.012402    3172 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 11:57:46.012410    3172 cache.go:56] Caching tarball of preloaded images
	I0924 11:57:46.012480    3172 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 11:57:46.012486    3172 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 11:57:46.012566    3172 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/ha-978000/config.json ...
	I0924 11:57:46.013021    3172 start.go:360] acquireMachinesLock for ha-978000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 11:57:46.013056    3172 start.go:364] duration metric: took 28.584µs to acquireMachinesLock for "ha-978000"
	I0924 11:57:46.013067    3172 start.go:96] Skipping create...Using existing machine configuration
	I0924 11:57:46.013072    3172 fix.go:54] fixHost starting: 
	I0924 11:57:46.013204    3172 fix.go:112] recreateIfNeeded on ha-978000: state=Stopped err=<nil>
	W0924 11:57:46.013213    3172 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 11:57:46.017357    3172 out.go:177] * Restarting existing qemu2 VM for "ha-978000" ...
	I0924 11:57:46.024270    3172 qemu.go:418] Using hvf for hardware acceleration
	I0924 11:57:46.024307    3172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:17:2e:7a:a1:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/disk.qcow2
	I0924 11:57:46.026342    3172 main.go:141] libmachine: STDOUT: 
	I0924 11:57:46.026362    3172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 11:57:46.026394    3172 fix.go:56] duration metric: took 13.319792ms for fixHost
	I0924 11:57:46.026399    3172 start.go:83] releasing machines lock for "ha-978000", held for 13.33775ms
	W0924 11:57:46.026405    3172 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 11:57:46.026435    3172 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 11:57:46.026440    3172 start.go:729] Will try again in 5 seconds ...
	I0924 11:57:51.028736    3172 start.go:360] acquireMachinesLock for ha-978000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 11:57:51.029101    3172 start.go:364] duration metric: took 271.209µs to acquireMachinesLock for "ha-978000"
	I0924 11:57:51.029228    3172 start.go:96] Skipping create...Using existing machine configuration
	I0924 11:57:51.029245    3172 fix.go:54] fixHost starting: 
	I0924 11:57:51.029941    3172 fix.go:112] recreateIfNeeded on ha-978000: state=Stopped err=<nil>
	W0924 11:57:51.029967    3172 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 11:57:51.034391    3172 out.go:177] * Restarting existing qemu2 VM for "ha-978000" ...
	I0924 11:57:51.042258    3172 qemu.go:418] Using hvf for hardware acceleration
	I0924 11:57:51.042448    3172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:17:2e:7a:a1:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/disk.qcow2
	I0924 11:57:51.051137    3172 main.go:141] libmachine: STDOUT: 
	I0924 11:57:51.051194    3172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 11:57:51.051264    3172 fix.go:56] duration metric: took 22.020625ms for fixHost
	I0924 11:57:51.051278    3172 start.go:83] releasing machines lock for "ha-978000", held for 22.152541ms
	W0924 11:57:51.051440    3172 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-978000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-978000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 11:57:51.058220    3172 out.go:201] 
	W0924 11:57:51.062321    3172 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 11:57:51.062349    3172 out.go:270] * 
	* 
	W0924 11:57:51.064802    3172 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 11:57:51.074184    3172 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-978000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-978000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (33.764292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.54s)
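Every start attempt in this test fails at the same step: socket_vmnet_client cannot reach "/var/run/socket_vmnet", so the qemu2 driver never receives a network file descriptor and the VM restart is aborted before QEMU comes up. A host-side triage along the following lines usually isolates this; the paths come from the log above, while the brew service name is an assumption based on the standard socket_vmnet install, not something this report confirms:

    # Is the socket present, and is a socket_vmnet daemon alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # Restart the daemon; it must run as root to create the vmnet interface.
    sudo "$(which brew)" services restart socket_vmnet

    # Probe the socket the way the driver does: socket_vmnet_client connects,
    # passes the socket as fd 3, and execs the given command.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If that probe still reports "Connection refused", every qemu2 start later in this report will fail identically.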

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.597667ms)

-- stdout --
	* The control-plane node ha-978000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-978000"

-- /stdout --
** stderr ** 
	I0924 11:57:51.215788    3184 out.go:345] Setting OutFile to fd 1 ...
	I0924 11:57:51.216014    3184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:57:51.216018    3184 out.go:358] Setting ErrFile to fd 2...
	I0924 11:57:51.216020    3184 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:57:51.216148    3184 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 11:57:51.216381    3184 mustload.go:65] Loading cluster: ha-978000
	I0924 11:57:51.216621    3184 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0924 11:57:51.216937    3184 out.go:270] ! The control-plane node ha-978000 host is not running (will try others): state=Stopped
	! The control-plane node ha-978000 host is not running (will try others): state=Stopped
	W0924 11:57:51.217040    3184 out.go:270] ! The control-plane node ha-978000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-978000-m02 host is not running (will try others): state=Stopped
	I0924 11:57:51.221737    3184 out.go:177] * The control-plane node ha-978000-m03 host is not running: state=Stopped
	I0924 11:57:51.224733    3184 out.go:177]   To start a cluster, run: "minikube start -p ha-978000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-978000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: exit status 7 (31.360917ms)

-- stdout --
	ha-978000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-978000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-978000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-978000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0924 11:57:51.257816    3186 out.go:345] Setting OutFile to fd 1 ...
	I0924 11:57:51.257991    3186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:57:51.257994    3186 out.go:358] Setting ErrFile to fd 2...
	I0924 11:57:51.257996    3186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:57:51.258121    3186 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 11:57:51.258253    3186 out.go:352] Setting JSON to false
	I0924 11:57:51.258264    3186 mustload.go:65] Loading cluster: ha-978000
	I0924 11:57:51.258310    3186 notify.go:220] Checking for updates...
	I0924 11:57:51.258484    3186 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 11:57:51.258493    3186 status.go:174] checking status of ha-978000 ...
	I0924 11:57:51.258729    3186 status.go:364] ha-978000 host status = "Stopped" (err=<nil>)
	I0924 11:57:51.258732    3186 status.go:377] host is not running, skipping remaining checks
	I0924 11:57:51.258734    3186 status.go:176] ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 11:57:51.258744    3186 status.go:174] checking status of ha-978000-m02 ...
	I0924 11:57:51.258837    3186 status.go:364] ha-978000-m02 host status = "Stopped" (err=<nil>)
	I0924 11:57:51.258840    3186 status.go:377] host is not running, skipping remaining checks
	I0924 11:57:51.258841    3186 status.go:176] ha-978000-m02 status: &{Name:ha-978000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 11:57:51.258845    3186 status.go:174] checking status of ha-978000-m03 ...
	I0924 11:57:51.258932    3186 status.go:364] ha-978000-m03 host status = "Stopped" (err=<nil>)
	I0924 11:57:51.258935    3186 status.go:377] host is not running, skipping remaining checks
	I0924 11:57:51.258936    3186 status.go:176] ha-978000-m03 status: &{Name:ha-978000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 11:57:51.258940    3186 status.go:174] checking status of ha-978000-m04 ...
	I0924 11:57:51.259038    3186 status.go:364] ha-978000-m04 host status = "Stopped" (err=<nil>)
	I0924 11:57:51.259041    3186 status.go:377] host is not running, skipping remaining checks
	I0924 11:57:51.259043    3186 status.go:176] ha-978000-m04 status: &{Name:ha-978000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (30.94675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-978000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-978000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-978000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-978000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\
"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logv
iewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\
":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (30.873958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
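The assertion at ha_test.go:413 compares only the top-level "Status" field buried in the JSON dump above ("Degraded" expected, "Starting" reported). A filter such as the following extracts just the field under test when reading these reports; jq here is an assumption, any JSON tool works:

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | "\(.Name)\t\(.Status)"'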

TestMultiControlPlane/serial/StopCluster (300.23s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 stop -v=7 --alsologtostderr
E0924 11:58:55.125396    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
E0924 12:02:31.151539    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 12:02:32.037746    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-978000 stop -v=7 --alsologtostderr: (5m0.132385166s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr: exit status 7 (65.9115ms)

-- stdout --
	ha-978000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-978000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-978000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-978000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0924 12:02:51.565068    3585 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:02:51.565290    3585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:02:51.565295    3585 out.go:358] Setting ErrFile to fd 2...
	I0924 12:02:51.565298    3585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:02:51.565469    3585 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:02:51.565644    3585 out.go:352] Setting JSON to false
	I0924 12:02:51.565658    3585 mustload.go:65] Loading cluster: ha-978000
	I0924 12:02:51.565699    3585 notify.go:220] Checking for updates...
	I0924 12:02:51.565979    3585 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:02:51.565992    3585 status.go:174] checking status of ha-978000 ...
	I0924 12:02:51.566317    3585 status.go:364] ha-978000 host status = "Stopped" (err=<nil>)
	I0924 12:02:51.566321    3585 status.go:377] host is not running, skipping remaining checks
	I0924 12:02:51.566324    3585 status.go:176] ha-978000 status: &{Name:ha-978000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 12:02:51.566338    3585 status.go:174] checking status of ha-978000-m02 ...
	I0924 12:02:51.566471    3585 status.go:364] ha-978000-m02 host status = "Stopped" (err=<nil>)
	I0924 12:02:51.566475    3585 status.go:377] host is not running, skipping remaining checks
	I0924 12:02:51.566477    3585 status.go:176] ha-978000-m02 status: &{Name:ha-978000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 12:02:51.566482    3585 status.go:174] checking status of ha-978000-m03 ...
	I0924 12:02:51.566617    3585 status.go:364] ha-978000-m03 host status = "Stopped" (err=<nil>)
	I0924 12:02:51.566620    3585 status.go:377] host is not running, skipping remaining checks
	I0924 12:02:51.566623    3585 status.go:176] ha-978000-m03 status: &{Name:ha-978000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 12:02:51.566627    3585 status.go:174] checking status of ha-978000-m04 ...
	I0924 12:02:51.566751    3585 status.go:364] ha-978000-m04 host status = "Stopped" (err=<nil>)
	I0924 12:02:51.566754    3585 status.go:377] host is not running, skipping remaining checks
	I0924 12:02:51.566757    3585 status.go:176] ha-978000-m04 status: &{Name:ha-978000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": ha-978000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-978000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-978000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-978000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": ha-978000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-978000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-978000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-978000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr": ha-978000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-978000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-978000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-978000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (33.252167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (300.23s)
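All three failed checks above (ha_test.go:543, :549 and :552) scrape the plain-text status output; they fail here because no node was ever running before the stop, not because the stop itself misbehaved. The same per-node state can be read with the Go-template form the post-mortem already uses, extended to other fields of the status struct dumped in the stderr log (a sketch; field names are taken from that struct dump, and the template is applied once per node):

    out/minikube-darwin-arm64 status -p ha-978000 --format '{{.Name}}:{{.Host}}/{{.Kubelet}}'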

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-978000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-978000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.181110709s)

-- stdout --
	* [ha-978000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-978000" primary control-plane node in "ha-978000" cluster
	* Restarting existing qemu2 VM for "ha-978000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-978000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:02:51.630432    3589 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:02:51.630599    3589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:02:51.630602    3589 out.go:358] Setting ErrFile to fd 2...
	I0924 12:02:51.630605    3589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:02:51.630724    3589 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:02:51.631884    3589 out.go:352] Setting JSON to false
	I0924 12:02:51.647936    3589 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3742,"bootTime":1727200829,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:02:51.648017    3589 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:02:51.653491    3589 out.go:177] * [ha-978000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:02:51.660396    3589 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:02:51.660451    3589 notify.go:220] Checking for updates...
	I0924 12:02:51.667305    3589 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:02:51.670372    3589 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:02:51.673370    3589 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:02:51.676379    3589 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:02:51.679366    3589 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:02:51.682758    3589 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:02:51.683054    3589 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:02:51.687303    3589 out.go:177] * Using the qemu2 driver based on existing profile
	I0924 12:02:51.694363    3589 start.go:297] selected driver: qemu2
	I0924 12:02:51.694370    3589 start.go:901] validating driver "qemu2" against &{Name:ha-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-978000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:02:51.694447    3589 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:02:51.696927    3589 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:02:51.696953    3589 cni.go:84] Creating CNI manager for ""
	I0924 12:02:51.696978    3589 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0924 12:02:51.697029    3589 start.go:340] cluster config:
	{Name:ha-978000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-978000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:02:51.700621    3589 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:02:51.708370    3589 out.go:177] * Starting "ha-978000" primary control-plane node in "ha-978000" cluster
	I0924 12:02:51.712299    3589 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:02:51.712316    3589 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:02:51.712324    3589 cache.go:56] Caching tarball of preloaded images
	I0924 12:02:51.712387    3589 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:02:51.712394    3589 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:02:51.712483    3589 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/ha-978000/config.json ...
	I0924 12:02:51.712904    3589 start.go:360] acquireMachinesLock for ha-978000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:02:51.712937    3589 start.go:364] duration metric: took 27.416µs to acquireMachinesLock for "ha-978000"
	I0924 12:02:51.712947    3589 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:02:51.712951    3589 fix.go:54] fixHost starting: 
	I0924 12:02:51.713066    3589 fix.go:112] recreateIfNeeded on ha-978000: state=Stopped err=<nil>
	W0924 12:02:51.713073    3589 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:02:51.717419    3589 out.go:177] * Restarting existing qemu2 VM for "ha-978000" ...
	I0924 12:02:51.725325    3589 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:02:51.725366    3589 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:17:2e:7a:a1:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/disk.qcow2
	I0924 12:02:51.727408    3589 main.go:141] libmachine: STDOUT: 
	I0924 12:02:51.727428    3589 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:02:51.727455    3589 fix.go:56] duration metric: took 14.501459ms for fixHost
	I0924 12:02:51.727461    3589 start.go:83] releasing machines lock for "ha-978000", held for 14.519125ms
	W0924 12:02:51.727468    3589 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:02:51.727504    3589 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:02:51.727508    3589 start.go:729] Will try again in 5 seconds ...
	I0924 12:02:56.729706    3589 start.go:360] acquireMachinesLock for ha-978000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:02:56.730151    3589 start.go:364] duration metric: took 334.833µs to acquireMachinesLock for "ha-978000"
	I0924 12:02:56.730287    3589 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:02:56.730306    3589 fix.go:54] fixHost starting: 
	I0924 12:02:56.730996    3589 fix.go:112] recreateIfNeeded on ha-978000: state=Stopped err=<nil>
	W0924 12:02:56.731022    3589 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:02:56.735446    3589 out.go:177] * Restarting existing qemu2 VM for "ha-978000" ...
	I0924 12:02:56.739322    3589 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:02:56.739488    3589 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:17:2e:7a:a1:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/ha-978000/disk.qcow2
	I0924 12:02:56.748264    3589 main.go:141] libmachine: STDOUT: 
	I0924 12:02:56.748330    3589 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:02:56.748389    3589 fix.go:56] duration metric: took 18.0865ms for fixHost
	I0924 12:02:56.748408    3589 start.go:83] releasing machines lock for "ha-978000", held for 18.232041ms
	W0924 12:02:56.748574    3589 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-978000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-978000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:02:56.755350    3589 out.go:201] 
	W0924 12:02:56.759399    3589 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:02:56.759432    3589 out.go:270] * 
	* 
	W0924 12:02:56.761892    3589 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:02:56.774408    3589 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-978000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (71.166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
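Exit status 80 is the generic GUEST_PROVISION failure, and the log's own advice is the sensible remediation order once the socket_vmnet daemon is reachable again: delete the stale profile, then recreate it with the flags the test itself uses:

    out/minikube-darwin-arm64 delete -p ha-978000
    out/minikube-darwin-arm64 start -p ha-978000 --wait=true --driver=qemu2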

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-978000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-978000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-978000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-978000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\
"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logv
iewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\
":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (31.663292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-978000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-978000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.125292ms)

-- stdout --
	* The control-plane node ha-978000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-978000"

-- /stdout --
** stderr ** 
	I0924 12:02:56.967441    3604 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:02:56.967623    3604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:02:56.967626    3604 out.go:358] Setting ErrFile to fd 2...
	I0924 12:02:56.967628    3604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:02:56.967756    3604 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:02:56.968019    3604 mustload.go:65] Loading cluster: ha-978000
	I0924 12:02:56.968260    3604 config.go:182] Loaded profile config "ha-978000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0924 12:02:56.968562    3604 out.go:270] ! The control-plane node ha-978000 host is not running (will try others): state=Stopped
	! The control-plane node ha-978000 host is not running (will try others): state=Stopped
	W0924 12:02:56.968660    3604 out.go:270] ! The control-plane node ha-978000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-978000-m02 host is not running (will try others): state=Stopped
	I0924 12:02:56.971506    3604 out.go:177] * The control-plane node ha-978000-m03 host is not running: state=Stopped
	I0924 12:02:56.975403    3604 out.go:177]   To start a cluster, run: "minikube start -p ha-978000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-978000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-978000 -n ha-978000: exit status 7 (30.360833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-978000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (9.95s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-293000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-293000 --driver=qemu2 : exit status 80 (9.884132458s)

-- stdout --
	* [image-293000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-293000" primary control-plane node in "image-293000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-293000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-293000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-293000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-293000 -n image-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-293000 -n image-293000: exit status 7 (68.82475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.95s)
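
The recurring root cause across these start failures is that nothing is listening on /var/run/socket_vmnet. A minimal Go sketch (a hypothetical diagnostic, not part of the test suite) that reproduces the check the qemu2 driver effectively performs before launching the VM:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the unix socket the qemu2 driver depends on. With the daemon
		// down this fails with "connection refused" (socket file present but
		// no listener) or "no such file or directory" (socket file absent).
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}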

TestJSONOutput/start/Command (9.83s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-650000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-650000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.830666625s)

-- stdout --
	{"specversion":"1.0","id":"0d095870-613c-443d-8bf6-9672592c1f1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-650000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1a8c9e6-3d88-4568-a0e0-b5751f9ef262","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19700"}}
	{"specversion":"1.0","id":"59f59116-2b95-44d8-886a-99b8548815b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig"}}
	{"specversion":"1.0","id":"b6decb2b-7fb6-4aaa-b1e8-19b260d28327","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"43cddde8-f6e4-4779-9ee6-de058ff0722e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7d058043-5d66-4450-ac49-ffa31b3d6dc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube"}}
	{"specversion":"1.0","id":"fff45a9f-b91e-4d8b-9264-aa81f7331aae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7df12b38-1184-4962-a70b-04d08ff42f28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a4f73bc-883f-435b-bbe1-37cc4aebe602","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"d72084b8-6a61-4519-b8f2-05e15b0c403f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-650000\" primary control-plane node in \"json-output-650000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"918ecf39-642c-46a3-8a44-ace830bcea80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"77d495ba-3a56-4f0b-9630-8bd4294f5ca4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-650000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"21eaaa09-d5da-4d32-870f-1165812f641c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"c515d188-823e-46b3-9a95-1d4a59ad2cbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"edaf8b0c-e981-404c-901f-d4971bd59ebf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-650000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a7ea8e0e-f666-4535-aa66-20122b575e6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"867c7954-c464-4602-a928-f477183ddae9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-650000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.83s)
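
The secondary JSON failure above ("converting to cloud events: invalid character 'O'") follows directly from the primary one: the raw OUTPUT:/ERROR: lines emitted by the driver are interleaved with the CloudEvents stream, and the test decodes stdout line by line as JSON. A reduced sketch of that decoding step (illustrative; not the actual json_output_test.go helper):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
			`OUTPUT: `, // raw driver output interleaved with the event stream
		}
		for _, line := range lines {
			var ev map[string]interface{}
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				// Reproduces the failure above: a line starting with 'O'
				// (or '*' in the unpause case below) is not a JSON value.
				fmt.Println("converting to cloud events:", err)
				continue
			}
			fmt.Println("event type:", ev["type"])
		}
	}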

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-650000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-650000 --output=json --user=testUser: exit status 83 (76.697666ms)

-- stdout --
	{"specversion":"1.0","id":"cb513a74-9e32-4cc8-a318-db2a581de626","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-650000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"223ac734-3083-461a-8e6a-09058a031b40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-650000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-650000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-650000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-650000 --output=json --user=testUser: exit status 83 (44.923375ms)

-- stdout --
	* The control-plane node json-output-650000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-650000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-650000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-650000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.22s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-530000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-530000 --driver=qemu2 : exit status 80 (9.917648625s)

-- stdout --
	* [first-530000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-530000" primary control-plane node in "first-530000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-530000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-530000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-530000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-24 12:03:30.950734 -0700 PDT m=+2686.885204292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-531000 -n second-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-531000 -n second-531000: exit status 85 (81.602458ms)

-- stdout --
	* Profile "second-531000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-531000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-531000" host is not running, skipping log retrieval (state="* Profile \"second-531000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-531000\"")
helpers_test.go:175: Cleaning up "second-531000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-531000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-24 12:03:31.14449 -0700 PDT m=+2687.078960626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-530000 -n first-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-530000 -n first-530000: exit status 7 (30.725916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-530000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-530000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-530000
--- FAIL: TestMinikubeProfile (10.22s)
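
Note how the post-mortem helpers distinguish the status exit codes seen here: 7 means the profile exists but its host is stopped ("may be ok"), while 85 means the profile was never created. A sketch of that decision (the switch is illustrative; the binary path and codes are taken from the log above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "first-530000")
		out, err := cmd.Output() // stdout is returned even on non-zero exit

		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode()
		} else if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}

		switch code {
		case 0:
			fmt.Printf("host state: %s", out)
		case 7: // host not running; the harness treats this as "may be ok"
			fmt.Printf("host not running: %s", out)
		case 85: // profile not found
			fmt.Println("profile does not exist")
		default:
			fmt.Println("unexpected exit code:", code)
		}
	}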

TestMountStart/serial/StartWithMountFirst (9.97s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-531000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-531000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.90198625s)

-- stdout --
	* [mount-start-1-531000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-531000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-531000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-531000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-531000 -n mount-start-1-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-531000 -n mount-start-1-531000: exit status 7 (68.432667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-531000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.97s)

TestMultiNode/serial/FreshStart2Nodes (9.85s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-504000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-504000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.775682333s)

-- stdout --
	* [multinode-504000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-504000" primary control-plane node in "multinode-504000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-504000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:03:41.438779    3748 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:03:41.438896    3748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:03:41.438899    3748 out.go:358] Setting ErrFile to fd 2...
	I0924 12:03:41.438902    3748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:03:41.439031    3748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:03:41.440071    3748 out.go:352] Setting JSON to false
	I0924 12:03:41.456163    3748 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3792,"bootTime":1727200829,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:03:41.456239    3748 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:03:41.462580    3748 out.go:177] * [multinode-504000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:03:41.471530    3748 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:03:41.471595    3748 notify.go:220] Checking for updates...
	I0924 12:03:41.480449    3748 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:03:41.483518    3748 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:03:41.486432    3748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:03:41.489621    3748 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:03:41.492505    3748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:03:41.495617    3748 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:03:41.499435    3748 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:03:41.506462    3748 start.go:297] selected driver: qemu2
	I0924 12:03:41.506470    3748 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:03:41.506480    3748 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:03:41.508705    3748 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:03:41.512487    3748 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:03:41.515536    3748 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:03:41.515556    3748 cni.go:84] Creating CNI manager for ""
	I0924 12:03:41.515575    3748 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0924 12:03:41.515579    3748 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0924 12:03:41.515620    3748 start.go:340] cluster config:
	{Name:multinode-504000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:03:41.519225    3748 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:03:41.526452    3748 out.go:177] * Starting "multinode-504000" primary control-plane node in "multinode-504000" cluster
	I0924 12:03:41.530482    3748 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:03:41.530499    3748 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:03:41.530508    3748 cache.go:56] Caching tarball of preloaded images
	I0924 12:03:41.530597    3748 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:03:41.530604    3748 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:03:41.530829    3748 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/multinode-504000/config.json ...
	I0924 12:03:41.530841    3748 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/multinode-504000/config.json: {Name:mk8147c62895a315571c7455348e13c7a7cb15c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:03:41.531068    3748 start.go:360] acquireMachinesLock for multinode-504000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:03:41.531104    3748 start.go:364] duration metric: took 29.834µs to acquireMachinesLock for "multinode-504000"
	I0924 12:03:41.531118    3748 start.go:93] Provisioning new machine with config: &{Name:multinode-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:03:41.531152    3748 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:03:41.539474    3748 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:03:41.557879    3748 start.go:159] libmachine.API.Create for "multinode-504000" (driver="qemu2")
	I0924 12:03:41.557914    3748 client.go:168] LocalClient.Create starting
	I0924 12:03:41.557990    3748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:03:41.558022    3748 main.go:141] libmachine: Decoding PEM data...
	I0924 12:03:41.558031    3748 main.go:141] libmachine: Parsing certificate...
	I0924 12:03:41.558066    3748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:03:41.558096    3748 main.go:141] libmachine: Decoding PEM data...
	I0924 12:03:41.558105    3748 main.go:141] libmachine: Parsing certificate...
	I0924 12:03:41.558455    3748 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:03:41.716593    3748 main.go:141] libmachine: Creating SSH key...
	I0924 12:03:41.762449    3748 main.go:141] libmachine: Creating Disk image...
	I0924 12:03:41.762455    3748 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:03:41.762625    3748 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2
	I0924 12:03:41.771618    3748 main.go:141] libmachine: STDOUT: 
	I0924 12:03:41.771632    3748 main.go:141] libmachine: STDERR: 
	I0924 12:03:41.771695    3748 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2 +20000M
	I0924 12:03:41.779369    3748 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:03:41.779380    3748 main.go:141] libmachine: STDERR: 
	I0924 12:03:41.779391    3748 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2
	I0924 12:03:41.779395    3748 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:03:41.779412    3748 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:03:41.779442    3748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:6f:21:bc:8b:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2
	I0924 12:03:41.780993    3748 main.go:141] libmachine: STDOUT: 
	I0924 12:03:41.781010    3748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:03:41.781033    3748 client.go:171] duration metric: took 223.113ms to LocalClient.Create
	I0924 12:03:43.783224    3748 start.go:128] duration metric: took 2.252062042s to createHost
	I0924 12:03:43.783344    3748 start.go:83] releasing machines lock for "multinode-504000", held for 2.252241375s
	W0924 12:03:43.783402    3748 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:03:43.799479    3748 out.go:177] * Deleting "multinode-504000" in qemu2 ...
	W0924 12:03:43.829732    3748 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:03:43.829752    3748 start.go:729] Will try again in 5 seconds ...
	I0924 12:03:48.831897    3748 start.go:360] acquireMachinesLock for multinode-504000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:03:48.832446    3748 start.go:364] duration metric: took 446.125µs to acquireMachinesLock for "multinode-504000"
	I0924 12:03:48.832602    3748 start.go:93] Provisioning new machine with config: &{Name:multinode-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:03:48.832822    3748 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:03:48.850473    3748 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:03:48.901259    3748 start.go:159] libmachine.API.Create for "multinode-504000" (driver="qemu2")
	I0924 12:03:48.901321    3748 client.go:168] LocalClient.Create starting
	I0924 12:03:48.901457    3748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:03:48.901531    3748 main.go:141] libmachine: Decoding PEM data...
	I0924 12:03:48.901552    3748 main.go:141] libmachine: Parsing certificate...
	I0924 12:03:48.901617    3748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:03:48.901661    3748 main.go:141] libmachine: Decoding PEM data...
	I0924 12:03:48.901674    3748 main.go:141] libmachine: Parsing certificate...
	I0924 12:03:48.902179    3748 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:03:49.068217    3748 main.go:141] libmachine: Creating SSH key...
	I0924 12:03:49.115819    3748 main.go:141] libmachine: Creating Disk image...
	I0924 12:03:49.115824    3748 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:03:49.115999    3748 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2
	I0924 12:03:49.125174    3748 main.go:141] libmachine: STDOUT: 
	I0924 12:03:49.125196    3748 main.go:141] libmachine: STDERR: 
	I0924 12:03:49.125247    3748 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2 +20000M
	I0924 12:03:49.132984    3748 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:03:49.132999    3748 main.go:141] libmachine: STDERR: 
	I0924 12:03:49.133008    3748 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2
	I0924 12:03:49.133012    3748 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:03:49.133032    3748 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:03:49.133059    3748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f3:4c:d7:1d:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2
	I0924 12:03:49.134604    3748 main.go:141] libmachine: STDOUT: 
	I0924 12:03:49.134621    3748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:03:49.134635    3748 client.go:171] duration metric: took 233.301541ms to LocalClient.Create
	I0924 12:03:51.136797    3748 start.go:128] duration metric: took 2.303948416s to createHost
	I0924 12:03:51.136872    3748 start.go:83] releasing machines lock for "multinode-504000", held for 2.304400375s
	W0924 12:03:51.137349    3748 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-504000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-504000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:03:51.153854    3748 out.go:201] 
	W0924 12:03:51.157007    3748 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:03:51.157032    3748 out.go:270] * 
	* 
	W0924 12:03:51.159752    3748 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:03:51.172887    3748 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-504000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000: exit status 7 (69.251333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.85s)
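
The qemu invocation in the stderr log shows why the socket matters: socket_vmnet_client connects to /var/run/socket_vmnet and hands the connection to qemu as file descriptor 3 ("-netdev socket,id=net0,fd=3"). A sketch of that handoff, inferred from the logged flags rather than from socket_vmnet's source:

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// This dial is exactly where the test run fails when no daemon is
		// listening on the socket.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}

		// ExtraFiles[0] becomes fd 3 in the child, matching the
		// "-netdev socket,id=net0,fd=3" flag above (other qemu flags elided).
		cmd := exec.Command("qemu-system-aarch64")
		cmd.ExtraFiles = []*os.File{f}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}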

TestMultiNode/serial/DeployApp2Nodes (73.95s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (128.260292ms)

** stderr ** 
	error: cluster "multinode-504000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- rollout status deployment/busybox: exit status 1 (58.576417ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.10225ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0924 12:03:51.503167    1598 retry.go:31] will retry after 1.050205425s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.392709ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0924 12:03:52.662102    1598 retry.go:31] will retry after 2.244623093s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.289292ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0924 12:03:55.013452    1598 retry.go:31] will retry after 2.039779233s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.652875ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0924 12:03:57.160229    1598 retry.go:31] will retry after 4.831557697s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.039792ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0924 12:04:02.096249    1598 retry.go:31] will retry after 4.467699901s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.220625ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0924 12:04:06.669711    1598 retry.go:31] will retry after 9.322688182s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.032666ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0924 12:04:16.098883    1598 retry.go:31] will retry after 13.13703011s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.856709ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0924 12:04:29.342210    1598 retry.go:31] will retry after 14.053650289s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.00575ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0924 12:04:43.502235    1598 retry.go:31] will retry after 21.339921131s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.413ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.679792ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.156709ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.725333ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.542917ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000: exit status 7 (30.738417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (73.95s)
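
Note: every kubectl call in DeployApp2Nodes above failed with `no server found for cluster "multinode-504000"`, and the harness kept retrying on a growing, jittered delay (the retry.go:31 lines). A minimal Go sketch of that backoff pattern, assuming an illustrative helper name and constants rather than minikube's actual retry code:

// retrysketch.go - exponential backoff with jitter, as suggested by the
// increasing "will retry after ..." durations logged above. Helper name
// and constants are assumptions for illustration only.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// doubling a jittered sleep between tries.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	err := retryWithBackoff(4, 500*time.Millisecond, func() error {
		return fmt.Errorf("failed to retrieve Pod IPs (may be temporary): exit status 1")
	})
	fmt.Println("giving up:", err)
}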

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-504000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.278333ms)

** stderr ** 
	error: no server found for cluster "multinode-504000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000: exit status 7 (30.5ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-504000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-504000 -v 3 --alsologtostderr: exit status 83 (41.071ms)

-- stdout --
	* The control-plane node multinode-504000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-504000"

-- /stdout --
** stderr ** 
	I0924 12:05:05.325016    3833 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:05:05.325169    3833 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:05.325173    3833 out.go:358] Setting ErrFile to fd 2...
	I0924 12:05:05.325175    3833 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:05.325296    3833 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:05:05.325549    3833 mustload.go:65] Loading cluster: multinode-504000
	I0924 12:05:05.325754    3833 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:05:05.330890    3833 out.go:177] * The control-plane node multinode-504000 host is not running: state=Stopped
	I0924 12:05:05.333862    3833 out.go:177]   To start a cluster, run: "minikube start -p multinode-504000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-504000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000: exit status 7 (30.681333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-504000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-504000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.370792ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-504000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-504000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-504000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000: exit status 7 (31.17975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-504000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-504000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-504000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-504000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000: exit status 7 (30.582792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
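
Note: the ProfileList failure above is a node-count check: the profile's Config.Nodes array in the logged JSON holds a single control-plane entry where the test expects three. A trimmed Go sketch of that check (struct fields reduced to what the logged JSON shows; everything beyond those is an assumption):

// profilecheck.go - decode `minikube profile list --output json` and
// count the nodes recorded in the profile config.
package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Abbreviated form of the output logged above: one profile, one node.
	raw := `{"invalid":[],"valid":[{"Name":"multinode-504000","Config":{"Nodes":[{"Name":"","ControlPlane":true}]}}]}`
	var pl profileList
	if err := json.Unmarshal([]byte(raw), &pl); err != nil {
		panic(err)
	}
	if got := len(pl.Valid[0].Config.Nodes); got != 3 {
		fmt.Printf("expected profile %q to include 3 nodes but have %d nodes\n", pl.Valid[0].Name, got)
	}
}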

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 status --output json --alsologtostderr: exit status 7 (30.3855ms)

-- stdout --
	{"Name":"multinode-504000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0924 12:05:05.536317    3845 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:05:05.536475    3845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:05.536478    3845 out.go:358] Setting ErrFile to fd 2...
	I0924 12:05:05.536480    3845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:05.536623    3845 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:05:05.536732    3845 out.go:352] Setting JSON to true
	I0924 12:05:05.536744    3845 mustload.go:65] Loading cluster: multinode-504000
	I0924 12:05:05.536808    3845 notify.go:220] Checking for updates...
	I0924 12:05:05.536946    3845 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:05:05.536954    3845 status.go:174] checking status of multinode-504000 ...
	I0924 12:05:05.537188    3845 status.go:364] multinode-504000 host status = "Stopped" (err=<nil>)
	I0924 12:05:05.537193    3845 status.go:377] host is not running, skipping remaining checks
	I0924 12:05:05.537195    3845 status.go:176] multinode-504000 status: &{Name:multinode-504000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-504000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000: exit status 7 (30.769791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
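
Note: the CopyFile failure above is a JSON shape mismatch rather than a crash: with a single stopped node, `status --output json` printed one object, while the test decodes into a slice ([]cluster.Status). A self-contained sketch reproducing the logged unmarshal error, with the Status struct trimmed to the fields actually shown:

// statusshape.go - unmarshaling a lone JSON object into a slice fails
// exactly as logged (modulo the package name in the error text).
package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	out := `{"Name":"multinode-504000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
	var statuses []Status
	err := json.Unmarshal([]byte(out), &statuses)
	fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
}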

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 node stop m03: exit status 85 (48.220667ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-504000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 status: exit status 7 (31.297792ms)

-- stdout --
	multinode-504000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 status --alsologtostderr: exit status 7 (29.96825ms)

-- stdout --
	multinode-504000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0924 12:05:05.677357    3853 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:05:05.677522    3853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:05.677525    3853 out.go:358] Setting ErrFile to fd 2...
	I0924 12:05:05.677540    3853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:05.677668    3853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:05:05.677777    3853 out.go:352] Setting JSON to false
	I0924 12:05:05.677788    3853 mustload.go:65] Loading cluster: multinode-504000
	I0924 12:05:05.677843    3853 notify.go:220] Checking for updates...
	I0924 12:05:05.678025    3853 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:05:05.678033    3853 status.go:174] checking status of multinode-504000 ...
	I0924 12:05:05.678252    3853 status.go:364] multinode-504000 host status = "Stopped" (err=<nil>)
	I0924 12:05:05.678256    3853 status.go:377] host is not running, skipping remaining checks
	I0924 12:05:05.678258    3853 status.go:176] multinode-504000 status: &{Name:multinode-504000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-504000 status --alsologtostderr": multinode-504000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000: exit status 7 (30.871125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (45.74s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.397708ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0924 12:05:05.739396    3857 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:05:05.739662    3857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:05.739666    3857 out.go:358] Setting ErrFile to fd 2...
	I0924 12:05:05.739668    3857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:05.739819    3857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:05:05.740046    3857 mustload.go:65] Loading cluster: multinode-504000
	I0924 12:05:05.740283    3857 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:05:05.743963    3857 out.go:201] 
	W0924 12:05:05.746875    3857 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0924 12:05:05.746880    3857 out.go:270] * 
	* 
	W0924 12:05:05.748736    3857 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:05:05.751848    3857 out.go:201] 

** /stderr **
multinode_test.go:284: I0924 12:05:05.739396    3857 out.go:345] Setting OutFile to fd 1 ...
I0924 12:05:05.739662    3857 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 12:05:05.739666    3857 out.go:358] Setting ErrFile to fd 2...
I0924 12:05:05.739668    3857 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 12:05:05.739819    3857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
I0924 12:05:05.740046    3857 mustload.go:65] Loading cluster: multinode-504000
I0924 12:05:05.740283    3857 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 12:05:05.743963    3857 out.go:201] 
W0924 12:05:05.746875    3857 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0924 12:05:05.746880    3857 out.go:270] * 
* 
W0924 12:05:05.748736    3857 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0924 12:05:05.751848    3857 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-504000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr: exit status 7 (31.241625ms)

-- stdout --
	multinode-504000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0924 12:05:05.785315    3859 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:05:05.785474    3859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:05.785477    3859 out.go:358] Setting ErrFile to fd 2...
	I0924 12:05:05.785480    3859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:05.785608    3859 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:05:05.785732    3859 out.go:352] Setting JSON to false
	I0924 12:05:05.785743    3859 mustload.go:65] Loading cluster: multinode-504000
	I0924 12:05:05.785810    3859 notify.go:220] Checking for updates...
	I0924 12:05:05.785964    3859 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:05:05.785976    3859 status.go:174] checking status of multinode-504000 ...
	I0924 12:05:05.786209    3859 status.go:364] multinode-504000 host status = "Stopped" (err=<nil>)
	I0924 12:05:05.786212    3859 status.go:377] host is not running, skipping remaining checks
	I0924 12:05:05.786215    3859 status.go:176] multinode-504000 status: &{Name:multinode-504000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0924 12:05:05.787076    1598 retry.go:31] will retry after 868.765109ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr: exit status 7 (76.169167ms)

-- stdout --
	multinode-504000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0924 12:05:06.732211    3861 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:05:06.732432    3861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:06.732436    3861 out.go:358] Setting ErrFile to fd 2...
	I0924 12:05:06.732440    3861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:06.732624    3861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:05:06.732782    3861 out.go:352] Setting JSON to false
	I0924 12:05:06.732795    3861 mustload.go:65] Loading cluster: multinode-504000
	I0924 12:05:06.732844    3861 notify.go:220] Checking for updates...
	I0924 12:05:06.733068    3861 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:05:06.733079    3861 status.go:174] checking status of multinode-504000 ...
	I0924 12:05:06.733403    3861 status.go:364] multinode-504000 host status = "Stopped" (err=<nil>)
	I0924 12:05:06.733407    3861 status.go:377] host is not running, skipping remaining checks
	I0924 12:05:06.733410    3861 status.go:176] multinode-504000 status: &{Name:multinode-504000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0924 12:05:06.734458    1598 retry.go:31] will retry after 1.874161276s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr: exit status 7 (73.756334ms)

-- stdout --
	multinode-504000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0924 12:05:08.682547    3863 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:05:08.682754    3863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:08.682758    3863 out.go:358] Setting ErrFile to fd 2...
	I0924 12:05:08.682761    3863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:08.682940    3863 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:05:08.683098    3863 out.go:352] Setting JSON to false
	I0924 12:05:08.683113    3863 mustload.go:65] Loading cluster: multinode-504000
	I0924 12:05:08.683166    3863 notify.go:220] Checking for updates...
	I0924 12:05:08.683371    3863 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:05:08.683384    3863 status.go:174] checking status of multinode-504000 ...
	I0924 12:05:08.683686    3863 status.go:364] multinode-504000 host status = "Stopped" (err=<nil>)
	I0924 12:05:08.683691    3863 status.go:377] host is not running, skipping remaining checks
	I0924 12:05:08.683694    3863 status.go:176] multinode-504000 status: &{Name:multinode-504000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0924 12:05:08.684766    1598 retry.go:31] will retry after 2.418733038s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr: exit status 7 (74.961833ms)

-- stdout --
	multinode-504000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0924 12:05:11.178528    3865 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:05:11.178752    3865 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:11.178757    3865 out.go:358] Setting ErrFile to fd 2...
	I0924 12:05:11.178760    3865 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:11.178942    3865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:05:11.179116    3865 out.go:352] Setting JSON to false
	I0924 12:05:11.179130    3865 mustload.go:65] Loading cluster: multinode-504000
	I0924 12:05:11.179176    3865 notify.go:220] Checking for updates...
	I0924 12:05:11.179410    3865 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:05:11.179421    3865 status.go:174] checking status of multinode-504000 ...
	I0924 12:05:11.179765    3865 status.go:364] multinode-504000 host status = "Stopped" (err=<nil>)
	I0924 12:05:11.179770    3865 status.go:377] host is not running, skipping remaining checks
	I0924 12:05:11.179773    3865 status.go:176] multinode-504000 status: &{Name:multinode-504000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0924 12:05:11.180849    1598 retry.go:31] will retry after 3.194976147s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr: exit status 7 (74.466ms)

-- stdout --
	multinode-504000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0924 12:05:14.450445    3870 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:05:14.450656    3870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:14.450661    3870 out.go:358] Setting ErrFile to fd 2...
	I0924 12:05:14.450664    3870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:14.450870    3870 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:05:14.451050    3870 out.go:352] Setting JSON to false
	I0924 12:05:14.451065    3870 mustload.go:65] Loading cluster: multinode-504000
	I0924 12:05:14.451109    3870 notify.go:220] Checking for updates...
	I0924 12:05:14.451320    3870 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:05:14.451333    3870 status.go:174] checking status of multinode-504000 ...
	I0924 12:05:14.451633    3870 status.go:364] multinode-504000 host status = "Stopped" (err=<nil>)
	I0924 12:05:14.451638    3870 status.go:377] host is not running, skipping remaining checks
	I0924 12:05:14.451641    3870 status.go:176] multinode-504000 status: &{Name:multinode-504000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0924 12:05:14.452705    1598 retry.go:31] will retry after 3.848225823s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr: exit status 7 (73.635542ms)

-- stdout --
	multinode-504000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0924 12:05:18.374710    3872 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:05:18.374933    3872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:18.374938    3872 out.go:358] Setting ErrFile to fd 2...
	I0924 12:05:18.374942    3872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:18.375125    3872 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:05:18.375266    3872 out.go:352] Setting JSON to false
	I0924 12:05:18.375282    3872 mustload.go:65] Loading cluster: multinode-504000
	I0924 12:05:18.375321    3872 notify.go:220] Checking for updates...
	I0924 12:05:18.375527    3872 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:05:18.375537    3872 status.go:174] checking status of multinode-504000 ...
	I0924 12:05:18.375842    3872 status.go:364] multinode-504000 host status = "Stopped" (err=<nil>)
	I0924 12:05:18.375847    3872 status.go:377] host is not running, skipping remaining checks
	I0924 12:05:18.375850    3872 status.go:176] multinode-504000 status: &{Name:multinode-504000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0924 12:05:18.376873    1598 retry.go:31] will retry after 5.49353708s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr: exit status 7 (74.16825ms)

-- stdout --
	multinode-504000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0924 12:05:23.944698    3874 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:05:23.944889    3874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:23.944894    3874 out.go:358] Setting ErrFile to fd 2...
	I0924 12:05:23.944897    3874 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:23.945076    3874 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:05:23.945239    3874 out.go:352] Setting JSON to false
	I0924 12:05:23.945252    3874 mustload.go:65] Loading cluster: multinode-504000
	I0924 12:05:23.945290    3874 notify.go:220] Checking for updates...
	I0924 12:05:23.945544    3874 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:05:23.945554    3874 status.go:174] checking status of multinode-504000 ...
	I0924 12:05:23.945857    3874 status.go:364] multinode-504000 host status = "Stopped" (err=<nil>)
	I0924 12:05:23.945862    3874 status.go:377] host is not running, skipping remaining checks
	I0924 12:05:23.945865    3874 status.go:176] multinode-504000 status: &{Name:multinode-504000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0924 12:05:23.946965    1598 retry.go:31] will retry after 7.648788136s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr: exit status 7 (72.044875ms)

-- stdout --
	multinode-504000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0924 12:05:31.667921    3876 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:05:31.668135    3876 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:31.668140    3876 out.go:358] Setting ErrFile to fd 2...
	I0924 12:05:31.668143    3876 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:31.668319    3876 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:05:31.668483    3876 out.go:352] Setting JSON to false
	I0924 12:05:31.668498    3876 mustload.go:65] Loading cluster: multinode-504000
	I0924 12:05:31.668545    3876 notify.go:220] Checking for updates...
	I0924 12:05:31.668771    3876 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:05:31.668782    3876 status.go:174] checking status of multinode-504000 ...
	I0924 12:05:31.669104    3876 status.go:364] multinode-504000 host status = "Stopped" (err=<nil>)
	I0924 12:05:31.669108    3876 status.go:377] host is not running, skipping remaining checks
	I0924 12:05:31.669111    3876 status.go:176] multinode-504000 status: &{Name:multinode-504000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0924 12:05:31.670213    1598 retry.go:31] will retry after 19.671598059s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr: exit status 7 (75.843417ms)

-- stdout --
	multinode-504000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0924 12:05:51.417810    3886 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:05:51.418037    3886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:51.418041    3886 out.go:358] Setting ErrFile to fd 2...
	I0924 12:05:51.418044    3886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:51.418231    3886 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:05:51.418383    3886 out.go:352] Setting JSON to false
	I0924 12:05:51.418398    3886 mustload.go:65] Loading cluster: multinode-504000
	I0924 12:05:51.418432    3886 notify.go:220] Checking for updates...
	I0924 12:05:51.418679    3886 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:05:51.418689    3886 status.go:174] checking status of multinode-504000 ...
	I0924 12:05:51.419020    3886 status.go:364] multinode-504000 host status = "Stopped" (err=<nil>)
	I0924 12:05:51.419025    3886 status.go:377] host is not running, skipping remaining checks
	I0924 12:05:51.419028    3886 status.go:176] multinode-504000 status: &{Name:multinode-504000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-504000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000: exit status 7 (33.298375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (45.74s)

TestMultiNode/serial/RestartKeepsNodes (9s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-504000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-504000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-504000: (3.648975958s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-504000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-504000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.217346208s)

-- stdout --
	* [multinode-504000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-504000" primary control-plane node in "multinode-504000" cluster
	* Restarting existing qemu2 VM for "multinode-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:05:55.197958    3910 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:05:55.198115    3910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:55.198120    3910 out.go:358] Setting ErrFile to fd 2...
	I0924 12:05:55.198123    3910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:05:55.198289    3910 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:05:55.199558    3910 out.go:352] Setting JSON to false
	I0924 12:05:55.218759    3910 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3926,"bootTime":1727200829,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:05:55.218831    3910 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:05:55.223565    3910 out.go:177] * [multinode-504000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:05:55.229479    3910 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:05:55.229518    3910 notify.go:220] Checking for updates...
	I0924 12:05:55.236531    3910 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:05:55.240469    3910 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:05:55.243501    3910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:05:55.247494    3910 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:05:55.250421    3910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:05:55.253781    3910 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:05:55.253835    3910 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:05:55.258464    3910 out.go:177] * Using the qemu2 driver based on existing profile
	I0924 12:05:55.265495    3910 start.go:297] selected driver: qemu2
	I0924 12:05:55.265503    3910 start.go:901] validating driver "qemu2" against &{Name:multinode-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:05:55.265568    3910 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:05:55.267920    3910 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:05:55.267946    3910 cni.go:84] Creating CNI manager for ""
	I0924 12:05:55.267975    3910 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0924 12:05:55.268017    3910 start.go:340] cluster config:
	{Name:multinode-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:05:55.271728    3910 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:05:55.277455    3910 out.go:177] * Starting "multinode-504000" primary control-plane node in "multinode-504000" cluster
	I0924 12:05:55.281507    3910 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:05:55.281523    3910 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:05:55.281534    3910 cache.go:56] Caching tarball of preloaded images
	I0924 12:05:55.281596    3910 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:05:55.281602    3910 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:05:55.281653    3910 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/multinode-504000/config.json ...
	I0924 12:05:55.282105    3910 start.go:360] acquireMachinesLock for multinode-504000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:05:55.282140    3910 start.go:364] duration metric: took 29.125µs to acquireMachinesLock for "multinode-504000"
	I0924 12:05:55.282151    3910 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:05:55.282156    3910 fix.go:54] fixHost starting: 
	I0924 12:05:55.282280    3910 fix.go:112] recreateIfNeeded on multinode-504000: state=Stopped err=<nil>
	W0924 12:05:55.282289    3910 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:05:55.290494    3910 out.go:177] * Restarting existing qemu2 VM for "multinode-504000" ...
	I0924 12:05:55.293464    3910 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:05:55.293506    3910 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f3:4c:d7:1d:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2
	I0924 12:05:55.295609    3910 main.go:141] libmachine: STDOUT: 
	I0924 12:05:55.295626    3910 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:05:55.295656    3910 fix.go:56] duration metric: took 13.497958ms for fixHost
	I0924 12:05:55.295662    3910 start.go:83] releasing machines lock for "multinode-504000", held for 13.516833ms
	W0924 12:05:55.295668    3910 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:05:55.295698    3910 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:05:55.295703    3910 start.go:729] Will try again in 5 seconds ...
	I0924 12:06:00.297929    3910 start.go:360] acquireMachinesLock for multinode-504000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:06:00.298284    3910 start.go:364] duration metric: took 271.167µs to acquireMachinesLock for "multinode-504000"
	I0924 12:06:00.298415    3910 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:06:00.298436    3910 fix.go:54] fixHost starting: 
	I0924 12:06:00.299175    3910 fix.go:112] recreateIfNeeded on multinode-504000: state=Stopped err=<nil>
	W0924 12:06:00.299201    3910 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:06:00.303601    3910 out.go:177] * Restarting existing qemu2 VM for "multinode-504000" ...
	I0924 12:06:00.307570    3910 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:06:00.307781    3910 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f3:4c:d7:1d:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2
	I0924 12:06:00.316528    3910 main.go:141] libmachine: STDOUT: 
	I0924 12:06:00.316582    3910 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:06:00.316653    3910 fix.go:56] duration metric: took 18.218042ms for fixHost
	I0924 12:06:00.316674    3910 start.go:83] releasing machines lock for "multinode-504000", held for 18.367708ms
	W0924 12:06:00.316843    3910 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-504000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-504000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:06:00.322559    3910 out.go:201] 
	W0924 12:06:00.326684    3910 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:06:00.326710    3910 out.go:270] * 
	* 
	W0924 12:06:00.329461    3910 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:06:00.337495    3910 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-504000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-504000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000: exit status 7 (32.951333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.00s)
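
Every retry in this test fails at the same step: the qemu2 driver launches the VM through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, i.e. no socket_vmnet daemon was listening on the build host. The following is a minimal, hypothetical Go diagnostic (not part of the minikube test suite) that reproduces the same check by dialing the unix socket directly:

// checksocket.go: standalone diagnostic sketch. The socket path is taken
// verbatim from the logs above; a missing or dead daemon yields the same
// "connection refused" seen throughout this report.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Because every later qemu2 start in this report goes through the same socket, a failure here would explain the long run of identical GUEST_PROVISION errors below.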

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 node delete m03: exit status 83 (40.729375ms)

-- stdout --
	* The control-plane node multinode-504000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-504000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-504000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 status --alsologtostderr: exit status 7 (30.667625ms)

-- stdout --
	multinode-504000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0924 12:06:00.524836    3924 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:06:00.524996    3924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:06:00.525002    3924 out.go:358] Setting ErrFile to fd 2...
	I0924 12:06:00.525005    3924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:06:00.525137    3924 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:06:00.525261    3924 out.go:352] Setting JSON to false
	I0924 12:06:00.525273    3924 mustload.go:65] Loading cluster: multinode-504000
	I0924 12:06:00.525339    3924 notify.go:220] Checking for updates...
	I0924 12:06:00.525489    3924 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:06:00.525498    3924 status.go:174] checking status of multinode-504000 ...
	I0924 12:06:00.525732    3924 status.go:364] multinode-504000 host status = "Stopped" (err=<nil>)
	I0924 12:06:00.525736    3924 status.go:377] host is not running, skipping remaining checks
	I0924 12:06:00.525738    3924 status.go:176] multinode-504000 status: &{Name:multinode-504000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-504000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000: exit status 7 (30.353084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
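
The --format={{.Host}} argument used by the post-mortem helper is a Go text/template rendered against the status structure that the stderr log prints verbatim at status.go:176. A small illustrative sketch of that rendering; the Status type below is a stand-in mirroring only the fields visible in that log line, not minikube's actual type:

package main

import (
	"os"
	"text/template"
)

// Status mirrors the fields printed in the log:
// &{Name:multinode-504000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped ...}
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	st := Status{Name: "multinode-504000", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped"}
	// Same template string the helper passes via --format.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	tmpl.Execute(os.Stdout, st) // prints "Stopped", matching the -- stdout -- block above
}

The helper then prints "status error: exit status 7 (may be ok)" because a stopped host is an expected state at this point; only log retrieval is skipped.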

TestMultiNode/serial/StopMultiNode (3.36s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-504000 stop: (3.231869583s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 status: exit status 7 (63.143833ms)

-- stdout --
	multinode-504000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-504000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-504000 status --alsologtostderr: exit status 7 (32.982917ms)

-- stdout --
	multinode-504000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0924 12:06:03.883740    3948 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:06:03.883896    3948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:06:03.883900    3948 out.go:358] Setting ErrFile to fd 2...
	I0924 12:06:03.883902    3948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:06:03.884031    3948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:06:03.884173    3948 out.go:352] Setting JSON to false
	I0924 12:06:03.884184    3948 mustload.go:65] Loading cluster: multinode-504000
	I0924 12:06:03.884237    3948 notify.go:220] Checking for updates...
	I0924 12:06:03.884384    3948 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:06:03.884392    3948 status.go:174] checking status of multinode-504000 ...
	I0924 12:06:03.884618    3948 status.go:364] multinode-504000 host status = "Stopped" (err=<nil>)
	I0924 12:06:03.884623    3948 status.go:377] host is not running, skipping remaining checks
	I0924 12:06:03.884625    3948 status.go:176] multinode-504000 status: &{Name:multinode-504000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-504000 status --alsologtostderr": multinode-504000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-504000 status --alsologtostderr": multinode-504000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000: exit status 7 (30.8215ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.36s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-504000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-504000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.18395025s)

-- stdout --
	* [multinode-504000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-504000" primary control-plane node in "multinode-504000" cluster
	* Restarting existing qemu2 VM for "multinode-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-504000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:06:03.944025    3952 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:06:03.944160    3952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:06:03.944164    3952 out.go:358] Setting ErrFile to fd 2...
	I0924 12:06:03.944166    3952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:06:03.944298    3952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:06:03.945302    3952 out.go:352] Setting JSON to false
	I0924 12:06:03.961206    3952 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3934,"bootTime":1727200829,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:06:03.961325    3952 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:06:03.966327    3952 out.go:177] * [multinode-504000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:06:03.973246    3952 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:06:03.973299    3952 notify.go:220] Checking for updates...
	I0924 12:06:03.980207    3952 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:06:03.983250    3952 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:06:03.986281    3952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:06:03.989186    3952 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:06:03.992198    3952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:06:03.995591    3952 config.go:182] Loaded profile config "multinode-504000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:06:03.995867    3952 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:06:04.000141    3952 out.go:177] * Using the qemu2 driver based on existing profile
	I0924 12:06:04.007185    3952 start.go:297] selected driver: qemu2
	I0924 12:06:04.007190    3952 start.go:901] validating driver "qemu2" against &{Name:multinode-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:06:04.007240    3952 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:06:04.009403    3952 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:06:04.009432    3952 cni.go:84] Creating CNI manager for ""
	I0924 12:06:04.009451    3952 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0924 12:06:04.009492    3952 start.go:340] cluster config:
	{Name:multinode-504000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-504000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:06:04.012834    3952 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:06:04.020140    3952 out.go:177] * Starting "multinode-504000" primary control-plane node in "multinode-504000" cluster
	I0924 12:06:04.024238    3952 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:06:04.024254    3952 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:06:04.024266    3952 cache.go:56] Caching tarball of preloaded images
	I0924 12:06:04.024320    3952 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:06:04.024326    3952 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:06:04.024388    3952 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/multinode-504000/config.json ...
	I0924 12:06:04.024844    3952 start.go:360] acquireMachinesLock for multinode-504000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:06:04.024873    3952 start.go:364] duration metric: took 23.958µs to acquireMachinesLock for "multinode-504000"
	I0924 12:06:04.024883    3952 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:06:04.024889    3952 fix.go:54] fixHost starting: 
	I0924 12:06:04.025003    3952 fix.go:112] recreateIfNeeded on multinode-504000: state=Stopped err=<nil>
	W0924 12:06:04.025012    3952 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:06:04.032293    3952 out.go:177] * Restarting existing qemu2 VM for "multinode-504000" ...
	I0924 12:06:04.036202    3952 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:06:04.036253    3952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f3:4c:d7:1d:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2
	I0924 12:06:04.038167    3952 main.go:141] libmachine: STDOUT: 
	I0924 12:06:04.038186    3952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:06:04.038218    3952 fix.go:56] duration metric: took 13.327541ms for fixHost
	I0924 12:06:04.038222    3952 start.go:83] releasing machines lock for "multinode-504000", held for 13.344208ms
	W0924 12:06:04.038229    3952 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:06:04.038270    3952 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:06:04.038275    3952 start.go:729] Will try again in 5 seconds ...
	I0924 12:06:09.040562    3952 start.go:360] acquireMachinesLock for multinode-504000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:06:09.041016    3952 start.go:364] duration metric: took 337µs to acquireMachinesLock for "multinode-504000"
	I0924 12:06:09.041125    3952 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:06:09.041144    3952 fix.go:54] fixHost starting: 
	I0924 12:06:09.041900    3952 fix.go:112] recreateIfNeeded on multinode-504000: state=Stopped err=<nil>
	W0924 12:06:09.041935    3952 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:06:09.046525    3952 out.go:177] * Restarting existing qemu2 VM for "multinode-504000" ...
	I0924 12:06:09.055489    3952 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:06:09.055730    3952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f3:4c:d7:1d:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/multinode-504000/disk.qcow2
	I0924 12:06:09.064895    3952 main.go:141] libmachine: STDOUT: 
	I0924 12:06:09.064997    3952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:06:09.065102    3952 fix.go:56] duration metric: took 23.956792ms for fixHost
	I0924 12:06:09.065125    3952 start.go:83] releasing machines lock for "multinode-504000", held for 24.085625ms
	W0924 12:06:09.065439    3952 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-504000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-504000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:06:09.073361    3952 out.go:201] 
	W0924 12:06:09.077340    3952 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:06:09.077364    3952 out.go:270] * 
	* 
	W0924 12:06:09.079721    3952 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:06:09.087304    3952 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-504000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000: exit status 7 (69.0315ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
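
The libmachine lines in the stderr blocks show how the VM is actually launched: qemu-system-aarch64 is not executed directly but wrapped by socket_vmnet_client, which connects to /var/run/socket_vmnet and exposes the resulting connection to qemu as file descriptor 3 (-netdev socket,id=net0,fd=3). A hedged sketch of that wrapper pattern, with most qemu flags elided (illustrative only, not minikube's code):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the "libmachine: executing:" lines above: the client wrapper
	// opens the vmnet socket, then runs qemu with the socket exposed as fd 3.
	cmd := exec.Command(
		"/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet",
		"qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-device", "virtio-net-pci,netdev=net0",
		"-netdev", "socket,id=net0,fd=3",
		// disk, firmware, and QMP flags elided; see the full command in the log
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		// With no daemon listening, this fails exactly as the report shows:
		// Failed to connect to "/var/run/socket_vmnet": Connection refused
		log.Fatalf("driver start failed: %v", err)
	}
}

Note that the wrapper, not qemu, is what fails: qemu never starts, which is consistent with fixHost completing in roughly 13-24ms in the timing lines above.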

TestMultiNode/serial/ValidateNameConflict (20.33s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-504000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-504000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-504000-m01 --driver=qemu2 : exit status 80 (10.124338875s)

-- stdout --
	* [multinode-504000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-504000-m01" primary control-plane node in "multinode-504000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-504000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-504000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-504000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-504000-m02 --driver=qemu2 : exit status 80 (9.977469542s)

-- stdout --
	* [multinode-504000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-504000-m02" primary control-plane node in "multinode-504000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-504000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-504000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-504000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-504000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-504000: exit status 83 (80.416542ms)

-- stdout --
	* The control-plane node multinode-504000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-504000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-504000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-504000 -n multinode-504000: exit status 7 (30.457084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-504000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.33s)

TestPreload (10.1s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-276000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-276000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.946203791s)

-- stdout --
	* [test-preload-276000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-276000" primary control-plane node in "test-preload-276000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-276000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:06:29.646776    4004 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:06:29.646910    4004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:06:29.646914    4004 out.go:358] Setting ErrFile to fd 2...
	I0924 12:06:29.646916    4004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:06:29.647065    4004 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:06:29.648190    4004 out.go:352] Setting JSON to false
	I0924 12:06:29.664362    4004 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3960,"bootTime":1727200829,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:06:29.664426    4004 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:06:29.671267    4004 out.go:177] * [test-preload-276000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:06:29.679181    4004 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:06:29.679213    4004 notify.go:220] Checking for updates...
	I0924 12:06:29.684725    4004 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:06:29.688135    4004 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:06:29.691198    4004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:06:29.694190    4004 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:06:29.697216    4004 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:06:29.700601    4004 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:06:29.700649    4004 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:06:29.705156    4004 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:06:29.712172    4004 start.go:297] selected driver: qemu2
	I0924 12:06:29.712179    4004 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:06:29.712187    4004 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:06:29.714619    4004 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:06:29.717198    4004 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:06:29.720226    4004 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:06:29.720244    4004 cni.go:84] Creating CNI manager for ""
	I0924 12:06:29.720265    4004 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:06:29.720269    4004 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 12:06:29.720298    4004 start.go:340] cluster config:
	{Name:test-preload-276000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-276000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:06:29.723991    4004 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:06:29.731063    4004 out.go:177] * Starting "test-preload-276000" primary control-plane node in "test-preload-276000" cluster
	I0924 12:06:29.735191    4004 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0924 12:06:29.735310    4004 cache.go:107] acquiring lock: {Name:mk945321c85c08e9c9840e1e707ca00e831c4213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:06:29.735321    4004 cache.go:107] acquiring lock: {Name:mkfc06ce9b5b8c6b87e005fade63a43670170a98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:06:29.735316    4004 cache.go:107] acquiring lock: {Name:mke2246282d3d67619a5ad2b3d0aa61ca0b675cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:06:29.735367    4004 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/test-preload-276000/config.json ...
	I0924 12:06:29.735388    4004 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/test-preload-276000/config.json: {Name:mk1b7082b6662db17fc530bf433664e773a20a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:06:29.735506    4004 cache.go:107] acquiring lock: {Name:mk8800cf48061f0d06085b18871e8dca01aac41b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:06:29.735586    4004 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0924 12:06:29.735597    4004 cache.go:107] acquiring lock: {Name:mk03e4ce2a67180eb4781cd49a74582a936e3f94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:06:29.735598    4004 cache.go:107] acquiring lock: {Name:mk5fe13025c01c085e0de9bc9f8eb11e86edd808 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:06:29.735626    4004 cache.go:107] acquiring lock: {Name:mkc964d1e1629be49d61da4663761f0fd94ef653 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:06:29.735642    4004 cache.go:107] acquiring lock: {Name:mkbbf8b39edf05627a5bfd12fcd8bf77d783c0f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:06:29.735587    4004 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0924 12:06:29.735785    4004 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0924 12:06:29.735793    4004 start.go:360] acquireMachinesLock for test-preload-276000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:06:29.735852    4004 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 12:06:29.735857    4004 start.go:364] duration metric: took 58.791µs to acquireMachinesLock for "test-preload-276000"
	I0924 12:06:29.735888    4004 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:06:29.735872    4004 start.go:93] Provisioning new machine with config: &{Name:test-preload-276000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-276000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:06:29.735926    4004 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:06:29.735952    4004 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:06:29.735985    4004 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0924 12:06:29.735963    4004 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0924 12:06:29.743147    4004 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:06:29.748409    4004 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0924 12:06:29.749156    4004 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0924 12:06:29.749250    4004 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0924 12:06:29.749392    4004 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0924 12:06:29.751155    4004 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0924 12:06:29.751159    4004 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:06:29.751193    4004 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:06:29.751238    4004 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0924 12:06:29.761929    4004 start.go:159] libmachine.API.Create for "test-preload-276000" (driver="qemu2")
	I0924 12:06:29.761956    4004 client.go:168] LocalClient.Create starting
	I0924 12:06:29.762040    4004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:06:29.762072    4004 main.go:141] libmachine: Decoding PEM data...
	I0924 12:06:29.762081    4004 main.go:141] libmachine: Parsing certificate...
	I0924 12:06:29.762122    4004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:06:29.762147    4004 main.go:141] libmachine: Decoding PEM data...
	I0924 12:06:29.762157    4004 main.go:141] libmachine: Parsing certificate...
	I0924 12:06:29.762505    4004 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:06:29.923171    4004 main.go:141] libmachine: Creating SSH key...
	I0924 12:06:29.974121    4004 main.go:141] libmachine: Creating Disk image...
	I0924 12:06:29.974141    4004 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:06:29.974341    4004 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/disk.qcow2
	I0924 12:06:29.984296    4004 main.go:141] libmachine: STDOUT: 
	I0924 12:06:29.984312    4004 main.go:141] libmachine: STDERR: 
	I0924 12:06:29.984370    4004 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/disk.qcow2 +20000M
	I0924 12:06:29.992803    4004 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:06:29.992823    4004 main.go:141] libmachine: STDERR: 
	I0924 12:06:29.992848    4004 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/disk.qcow2
	I0924 12:06:29.992853    4004 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:06:29.992870    4004 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:06:29.992896    4004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:06:f8:1b:3e:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/disk.qcow2
	I0924 12:06:29.995048    4004 main.go:141] libmachine: STDOUT: 
	I0924 12:06:29.995068    4004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:06:29.995096    4004 client.go:171] duration metric: took 233.13375ms to LocalClient.Create
	I0924 12:06:30.255586    4004 cache.go:162] opening:  /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0924 12:06:30.256420    4004 cache.go:162] opening:  /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0924 12:06:30.267927    4004 cache.go:162] opening:  /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0924 12:06:30.300372    4004 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0924 12:06:30.300407    4004 cache.go:162] opening:  /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0924 12:06:30.310930    4004 cache.go:162] opening:  /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0924 12:06:30.314598    4004 cache.go:162] opening:  /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0924 12:06:30.356291    4004 cache.go:162] opening:  /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0924 12:06:30.443692    4004 cache.go:157] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0924 12:06:30.443739    4004 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 708.151917ms
	I0924 12:06:30.443783    4004 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0924 12:06:30.809723    4004 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0924 12:06:30.809824    4004 cache.go:162] opening:  /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 12:06:31.356460    4004 cache.go:157] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0924 12:06:31.356543    4004 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.621234666s
	I0924 12:06:31.356574    4004 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0924 12:06:31.774017    4004 cache.go:157] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0924 12:06:31.774064    4004 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.038499042s
	I0924 12:06:31.774109    4004 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0924 12:06:31.995372    4004 start.go:128] duration metric: took 2.259423416s to createHost
	I0924 12:06:31.995430    4004 start.go:83] releasing machines lock for "test-preload-276000", held for 2.25957325s
	W0924 12:06:31.995490    4004 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:06:32.012737    4004 out.go:177] * Deleting "test-preload-276000" in qemu2 ...
	W0924 12:06:32.047989    4004 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:06:32.048012    4004 start.go:729] Will try again in 5 seconds ...
	I0924 12:06:32.734421    4004 cache.go:157] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0924 12:06:32.734495    4004 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.998907833s
	I0924 12:06:32.734519    4004 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0924 12:06:34.190514    4004 cache.go:157] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0924 12:06:34.190565    4004 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 4.45528025s
	I0924 12:06:34.190589    4004 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0924 12:06:34.654326    4004 cache.go:157] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0924 12:06:34.654389    4004 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.9189565s
	I0924 12:06:34.654416    4004 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0924 12:06:35.105520    4004 cache.go:157] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0924 12:06:35.105576    4004 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.37028625s
	I0924 12:06:35.105608    4004 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0924 12:06:37.048625    4004 start.go:360] acquireMachinesLock for test-preload-276000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:06:37.049007    4004 start.go:364] duration metric: took 300.958µs to acquireMachinesLock for "test-preload-276000"
	I0924 12:06:37.049134    4004 start.go:93] Provisioning new machine with config: &{Name:test-preload-276000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-276000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:06:37.049343    4004 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:06:37.056964    4004 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:06:37.105846    4004 start.go:159] libmachine.API.Create for "test-preload-276000" (driver="qemu2")
	I0924 12:06:37.105889    4004 client.go:168] LocalClient.Create starting
	I0924 12:06:37.106034    4004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:06:37.106099    4004 main.go:141] libmachine: Decoding PEM data...
	I0924 12:06:37.106117    4004 main.go:141] libmachine: Parsing certificate...
	I0924 12:06:37.106199    4004 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:06:37.106245    4004 main.go:141] libmachine: Decoding PEM data...
	I0924 12:06:37.106263    4004 main.go:141] libmachine: Parsing certificate...
	I0924 12:06:37.106841    4004 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:06:37.277813    4004 main.go:141] libmachine: Creating SSH key...
	I0924 12:06:37.495110    4004 main.go:141] libmachine: Creating Disk image...
	I0924 12:06:37.495118    4004 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:06:37.495291    4004 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/disk.qcow2
	I0924 12:06:37.504980    4004 main.go:141] libmachine: STDOUT: 
	I0924 12:06:37.505020    4004 main.go:141] libmachine: STDERR: 
	I0924 12:06:37.505096    4004 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/disk.qcow2 +20000M
	I0924 12:06:37.513185    4004 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:06:37.513201    4004 main.go:141] libmachine: STDERR: 
	I0924 12:06:37.513214    4004 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/disk.qcow2
	I0924 12:06:37.513223    4004 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:06:37.513232    4004 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:06:37.513272    4004 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:54:e5:5d:3a:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/test-preload-276000/disk.qcow2
	I0924 12:06:37.514907    4004 main.go:141] libmachine: STDOUT: 
	I0924 12:06:37.514919    4004 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:06:37.514935    4004 client.go:171] duration metric: took 409.043125ms to LocalClient.Create
	I0924 12:06:37.749908    4004 cache.go:157] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0924 12:06:37.749967    4004 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.014521042s
	I0924 12:06:37.749993    4004 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0924 12:06:37.750032    4004 cache.go:87] Successfully saved all images to host disk.
	I0924 12:06:39.516532    4004 start.go:128] duration metric: took 2.467169167s to createHost
	I0924 12:06:39.516591    4004 start.go:83] releasing machines lock for "test-preload-276000", held for 2.467573709s
	W0924 12:06:39.516886    4004 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-276000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-276000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:06:39.532566    4004 out.go:201] 
	W0924 12:06:39.536665    4004 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:06:39.536699    4004 out.go:270] * 
	* 
	W0924 12:06:39.539549    4004 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:06:39.549553    4004 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-276000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-24 12:06:39.566952 -0700 PDT m=+2875.502388084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-276000 -n test-preload-276000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-276000 -n test-preload-276000: exit status 7 (69.0885ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-276000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-276000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-276000
--- FAIL: TestPreload (10.10s)
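
Every failed start in this report bottoms out in the same root cause: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and that helper cannot reach the socket_vmnet daemon's unix socket, so LocalClient.Create fails before the VM ever boots. The probe below is a minimal pre-flight sketch, not minikube code; the only detail taken from the log is the socket path /var/run/socket_vmnet.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// If nothing is listening here, every "minikube start --driver=qemu2"
	// in this report fails the same way: exit status 80 (GUEST_PROVISION).
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Run on this host, the probe would report the same "connection refused" that qemu2 logs above, which points at the daemon itself rather than at any individual test.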

TestScheduledStopUnix (10.07s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-694000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-694000 --memory=2048 --driver=qemu2 : exit status 80 (9.915401291s)

-- stdout --
	* [scheduled-stop-694000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-694000" primary control-plane node in "scheduled-stop-694000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-694000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-694000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-694000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-694000" primary control-plane node in "scheduled-stop-694000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-694000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-694000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-24 12:06:49.631811 -0700 PDT m=+2885.567298542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-694000 -n scheduled-stop-694000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-694000 -n scheduled-stop-694000: exit status 7 (68.125875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-694000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-694000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-694000
--- FAIL: TestScheduledStopUnix (10.07s)
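
The stdout above shows the driver's create/delete/retry flow: the first StartHost attempt fails, the half-created machine is deleted, and a single retry runs after a five-second pause before start gives up with GUEST_PROVISION. The sketch below mirrors that control flow only; createHost and deleteHost are hypothetical stand-ins, not the real driver API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// The error every create attempt in this report returns.
var errConnRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

// Hypothetical stand-ins for the qemu2 driver calls seen in the log.
func createHost(name string) error { return errConnRefused }
func deleteHost(name string)       { fmt.Printf("* Deleting %q in qemu2 ...\n", name) }

func main() {
	name := "scheduled-stop-694000"
	if err := createHost(name); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		deleteHost(name) // clean up the half-created machine before retrying
		time.Sleep(5 * time.Second)
		if err := createHost(name); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}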

TestSkaffold (13.06s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe44844588 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe44844588 version: (1.070850917s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-037000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-037000 --memory=2600 --driver=qemu2 : exit status 80 (10.08165975s)

-- stdout --
	* [skaffold-037000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-037000" primary control-plane node in "skaffold-037000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-037000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-037000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-037000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-037000" primary control-plane node in "skaffold-037000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-037000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-037000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-24 12:07:02.694099 -0700 PDT m=+2898.629653042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-037000 -n skaffold-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-037000 -n skaffold-037000: exit status 7 (61.940916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-037000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-037000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-037000
--- FAIL: TestSkaffold (13.06s)
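
Each post-mortem block checks host state with "status --format={{.Host}}" and treats exit status 7 as possibly benign ("Stopped"). That --format value is a Go text/template rendered against minikube's status struct; the Status type below is a hypothetical stand-in used only to show how {{.Host}} selects the single field that appears in the -- stdout -- blocks above.

package main

import (
	"os"
	"text/template"
)

// Hypothetical stand-in for the struct minikube renders --format against.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
	// The same template string the post-mortem helper passes via --format.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	_ = tmpl.Execute(os.Stdout, st) // prints: Stopped
}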

TestRunningBinaryUpgrade (598.36s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3638484505 start -p running-upgrade-070000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3638484505 start -p running-upgrade-070000 --memory=2200 --vm-driver=qemu2 : (59.151292291s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-070000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0924 12:10:34.242829    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-070000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m24.982486792s)

-- stdout --
	* [running-upgrade-070000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-070000" primary control-plane node in "running-upgrade-070000" cluster
	* Updating the running qemu2 "running-upgrade-070000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0924 12:08:45.810044    4385 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:08:45.810200    4385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:08:45.810203    4385 out.go:358] Setting ErrFile to fd 2...
	I0924 12:08:45.810205    4385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:08:45.810327    4385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:08:45.811476    4385 out.go:352] Setting JSON to false
	I0924 12:08:45.828947    4385 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4096,"bootTime":1727200829,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:08:45.829022    4385 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:08:45.833933    4385 out.go:177] * [running-upgrade-070000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:08:45.841876    4385 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:08:45.841894    4385 notify.go:220] Checking for updates...
	I0924 12:08:45.851844    4385 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:08:45.858899    4385 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:08:45.861781    4385 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:08:45.865856    4385 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:08:45.869912    4385 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:08:45.873163    4385 config.go:182] Loaded profile config "running-upgrade-070000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:08:45.876859    4385 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 12:08:45.879933    4385 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:08:45.883873    4385 out.go:177] * Using the qemu2 driver based on existing profile
	I0924 12:08:45.890920    4385 start.go:297] selected driver: qemu2
	I0924 12:08:45.890927    4385 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-070000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50285 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0924 12:08:45.890988    4385 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:08:45.893316    4385 cni.go:84] Creating CNI manager for ""
	I0924 12:08:45.893354    4385 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:08:45.893378    4385 start.go:340] cluster config:
	{Name:running-upgrade-070000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50285 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0924 12:08:45.893431    4385 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:08:45.900855    4385 out.go:177] * Starting "running-upgrade-070000" primary control-plane node in "running-upgrade-070000" cluster
	I0924 12:08:45.904882    4385 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0924 12:08:45.904898    4385 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0924 12:08:45.904909    4385 cache.go:56] Caching tarball of preloaded images
	I0924 12:08:45.904966    4385 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:08:45.904971    4385 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0924 12:08:45.905026    4385 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/config.json ...
	I0924 12:08:45.905453    4385 start.go:360] acquireMachinesLock for running-upgrade-070000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:08:45.905479    4385 start.go:364] duration metric: took 21µs to acquireMachinesLock for "running-upgrade-070000"
	I0924 12:08:45.905488    4385 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:08:45.905494    4385 fix.go:54] fixHost starting: 
	I0924 12:08:45.906118    4385 fix.go:112] recreateIfNeeded on running-upgrade-070000: state=Running err=<nil>
	W0924 12:08:45.906128    4385 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:08:45.909878    4385 out.go:177] * Updating the running qemu2 "running-upgrade-070000" VM ...
	I0924 12:08:45.917826    4385 machine.go:93] provisionDockerMachine start ...
	I0924 12:08:45.917876    4385 main.go:141] libmachine: Using SSH client type: native
	I0924 12:08:45.917990    4385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c31c00] 0x102c34440 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0924 12:08:45.917995    4385 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 12:08:45.990823    4385 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-070000
	
	I0924 12:08:45.990840    4385 buildroot.go:166] provisioning hostname "running-upgrade-070000"
	I0924 12:08:45.990889    4385 main.go:141] libmachine: Using SSH client type: native
	I0924 12:08:45.991019    4385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c31c00] 0x102c34440 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0924 12:08:45.991025    4385 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-070000 && echo "running-upgrade-070000" | sudo tee /etc/hostname
	I0924 12:08:46.065397    4385 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-070000
	
	I0924 12:08:46.065453    4385 main.go:141] libmachine: Using SSH client type: native
	I0924 12:08:46.065573    4385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c31c00] 0x102c34440 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0924 12:08:46.065581    4385 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-070000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-070000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-070000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 12:08:46.138988    4385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 12:08:46.139001    4385 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19700-1081/.minikube CaCertPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19700-1081/.minikube}
	I0924 12:08:46.139009    4385 buildroot.go:174] setting up certificates
	I0924 12:08:46.139014    4385 provision.go:84] configureAuth start
	I0924 12:08:46.139021    4385 provision.go:143] copyHostCerts
	I0924 12:08:46.139085    4385 exec_runner.go:144] found /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.pem, removing ...
	I0924 12:08:46.139091    4385 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.pem
	I0924 12:08:46.139217    4385 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.pem (1078 bytes)
	I0924 12:08:46.139396    4385 exec_runner.go:144] found /Users/jenkins/minikube-integration/19700-1081/.minikube/cert.pem, removing ...
	I0924 12:08:46.139400    4385 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19700-1081/.minikube/cert.pem
	I0924 12:08:46.139442    4385 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19700-1081/.minikube/cert.pem (1123 bytes)
	I0924 12:08:46.139543    4385 exec_runner.go:144] found /Users/jenkins/minikube-integration/19700-1081/.minikube/key.pem, removing ...
	I0924 12:08:46.139546    4385 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19700-1081/.minikube/key.pem
	I0924 12:08:46.139587    4385 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19700-1081/.minikube/key.pem (1675 bytes)
	I0924 12:08:46.139665    4385 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-070000 san=[127.0.0.1 localhost minikube running-upgrade-070000]
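	Note: the server cert generated above is CA-signed and carries SANs for 127.0.0.1, localhost, minikube and the node name. A minimal openssl sketch of an equivalent flow, assuming the ca.pem/ca-key.pem pair from the certs directory is at hand; file names, key size and validity period are illustrative, not minikube's exact parameters:
	    # create a key and CSR for the server (names illustrative)
	    openssl req -new -newkey rsa:2048 -nodes \
	      -keyout server-key.pem -out server.csr \
	      -subj "/O=jenkins.running-upgrade-070000/CN=minikube"
	    # sign it with the CA, attaching the same SANs seen in the log line above
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -out server.pem -days 365 \
	      -extfile <(printf "subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:running-upgrade-070000")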
	I0924 12:08:46.209324    4385 provision.go:177] copyRemoteCerts
	I0924 12:08:46.209365    4385 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 12:08:46.209373    4385 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/running-upgrade-070000/id_rsa Username:docker}
	I0924 12:08:46.247616    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 12:08:46.254327    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0924 12:08:46.261733    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 12:08:46.269829    4385 provision.go:87] duration metric: took 130.806625ms to configureAuth
	I0924 12:08:46.269838    4385 buildroot.go:189] setting minikube options for container-runtime
	I0924 12:08:46.269944    4385 config.go:182] Loaded profile config "running-upgrade-070000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:08:46.269981    4385 main.go:141] libmachine: Using SSH client type: native
	I0924 12:08:46.270065    4385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c31c00] 0x102c34440 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0924 12:08:46.270070    4385 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0924 12:08:46.341013    4385 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0924 12:08:46.341022    4385 buildroot.go:70] root file system type: tmpfs
	I0924 12:08:46.341078    4385 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0924 12:08:46.341127    4385 main.go:141] libmachine: Using SSH client type: native
	I0924 12:08:46.341231    4385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c31c00] 0x102c34440 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0924 12:08:46.341264    4385 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0924 12:08:46.419463    4385 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0924 12:08:46.419523    4385 main.go:141] libmachine: Using SSH client type: native
	I0924 12:08:46.419642    4385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c31c00] 0x102c34440 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0924 12:08:46.419654    4385 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0924 12:08:46.494128    4385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 12:08:46.494140    4385 machine.go:96] duration metric: took 576.310458ms to provisionDockerMachine
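	Note: the unit update at 12:08:46 uses a write-new/diff/swap idiom so systemd is only reloaded and docker only restarted when the rendered unit actually changed. A standalone sketch of the same pattern, with the paths taken from the log:
	    new=/lib/systemd/system/docker.service.new
	    cur=/lib/systemd/system/docker.service
	    # diff exits non-zero when the files differ; only then swap and restart
	    if ! sudo diff -u "$cur" "$new"; then
	      sudo mv "$new" "$cur"
	      sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
	    fi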
	I0924 12:08:46.494146    4385 start.go:293] postStartSetup for "running-upgrade-070000" (driver="qemu2")
	I0924 12:08:46.494152    4385 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 12:08:46.494219    4385 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 12:08:46.494228    4385 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/running-upgrade-070000/id_rsa Username:docker}
	I0924 12:08:46.532480    4385 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 12:08:46.534889    4385 info.go:137] Remote host: Buildroot 2021.02.12
	I0924 12:08:46.534897    4385 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19700-1081/.minikube/addons for local assets ...
	I0924 12:08:46.534977    4385 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19700-1081/.minikube/files for local assets ...
	I0924 12:08:46.535076    4385 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19700-1081/.minikube/files/etc/ssl/certs/15982.pem -> 15982.pem in /etc/ssl/certs
	I0924 12:08:46.535177    4385 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 12:08:46.537871    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/files/etc/ssl/certs/15982.pem --> /etc/ssl/certs/15982.pem (1708 bytes)
	I0924 12:08:46.545150    4385 start.go:296] duration metric: took 51.000209ms for postStartSetup
	I0924 12:08:46.545162    4385 fix.go:56] duration metric: took 639.673958ms for fixHost
	I0924 12:08:46.545196    4385 main.go:141] libmachine: Using SSH client type: native
	I0924 12:08:46.545295    4385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c31c00] 0x102c34440 <nil>  [] 0s} localhost 50253 <nil> <nil>}
	I0924 12:08:46.545300    4385 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 12:08:46.615738    4385 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727204926.984174970
	
	I0924 12:08:46.615748    4385 fix.go:216] guest clock: 1727204926.984174970
	I0924 12:08:46.615751    4385 fix.go:229] Guest: 2024-09-24 12:08:46.98417497 -0700 PDT Remote: 2024-09-24 12:08:46.545164 -0700 PDT m=+0.754876168 (delta=439.01097ms)
	I0924 12:08:46.615764    4385 fix.go:200] guest clock delta is within tolerance: 439.01097ms
	I0924 12:08:46.615768    4385 start.go:83] releasing machines lock for "running-upgrade-070000", held for 710.288125ms
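	Note: the guest-clock check above runs date +%s.%N inside the VM and compares it with the host clock, accepting sub-second skew (439ms in this run). A rough sketch of the same comparison, assuming GNU date on the host and the SSH port/user from the log; the key path is abbreviated for illustration:
	    guest=$(ssh -p 50253 -i ~/.minikube/machines/running-upgrade-070000/id_rsa \
	      docker@localhost 'date +%s.%N')
	    host=$(date +%s.%N)          # GNU date; BSD/macOS date does not support %N
	    echo "delta: $(echo "$guest - $host" | bc)s"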
	I0924 12:08:46.615834    4385 ssh_runner.go:195] Run: cat /version.json
	I0924 12:08:46.615844    4385 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/running-upgrade-070000/id_rsa Username:docker}
	I0924 12:08:46.615834    4385 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 12:08:46.615913    4385 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/running-upgrade-070000/id_rsa Username:docker}
	W0924 12:08:46.616404    4385 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50253: connect: connection refused
	I0924 12:08:46.616427    4385 retry.go:31] will retry after 232.454505ms: dial tcp [::1]:50253: connect: connection refused
	W0924 12:08:46.895406    4385 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0924 12:08:46.895485    4385 ssh_runner.go:195] Run: systemctl --version
	I0924 12:08:46.898068    4385 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 12:08:46.900592    4385 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 12:08:46.900638    4385 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0924 12:08:46.904911    4385 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0924 12:08:46.910870    4385 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
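	Note: the two find/sed runs above rewrite whatever bridge/podman CNI conflists exist so their subnet and gateway land on the 10.244.0.0/16 pod CIDR. A minimal illustration of the rewrite against a sample file (GNU sed, as inside the guest; the sample path and JSON are made up):
	    cat > /tmp/sample-bridge.conflist <<'EOF'
	    { "plugins": [ { "type": "bridge", "ipam": { "subnet": "10.88.0.0/16", "gateway": "10.88.0.1" } } ] }
	    EOF
	    sed -i -r -e 's|"subnet": "[^"]*"|"subnet": "10.244.0.0/16"|g' \
	              -e 's|"gateway": "[^"]*"|"gateway": "10.244.0.1"|g' /tmp/sample-bridge.conflist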
	I0924 12:08:46.910878    4385 start.go:495] detecting cgroup driver to use...
	I0924 12:08:46.910957    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 12:08:46.917182    4385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0924 12:08:46.921056    4385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0924 12:08:46.924624    4385 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0924 12:08:46.924656    4385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0924 12:08:46.927708    4385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 12:08:46.930531    4385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0924 12:08:46.935036    4385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 12:08:46.937965    4385 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 12:08:46.941501    4385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0924 12:08:46.944753    4385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0924 12:08:46.947824    4385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0924 12:08:46.950599    4385 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 12:08:46.953625    4385 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 12:08:46.956335    4385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:08:47.054360    4385 ssh_runner.go:195] Run: sudo systemctl restart containerd
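	Note: the sed edits preceding the restart pin the pause sandbox image, force the cgroupfs driver (SystemdCgroup = false) and point conf_dir at /etc/cni/net.d. A quick way to verify the result on the guest, using the paths from the log:
	    grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	    # crictl should now answer on the endpoint written to /etc/crictl.yaml
	    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version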
	I0924 12:08:47.067139    4385 start.go:495] detecting cgroup driver to use...
	I0924 12:08:47.067207    4385 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0924 12:08:47.072698    4385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 12:08:47.077408    4385 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 12:08:47.083225    4385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 12:08:47.087667    4385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0924 12:08:47.092437    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 12:08:47.097782    4385 ssh_runner.go:195] Run: which cri-dockerd
	I0924 12:08:47.099279    4385 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0924 12:08:47.101877    4385 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0924 12:08:47.106963    4385 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0924 12:08:47.206542    4385 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0924 12:08:47.303661    4385 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0924 12:08:47.303724    4385 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0924 12:08:47.309216    4385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:08:47.394351    4385 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0924 12:08:50.841065    4385 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.446716042s)
	I0924 12:08:50.841141    4385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0924 12:08:50.846475    4385 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0924 12:08:50.852708    4385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0924 12:08:50.857734    4385 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0924 12:08:50.953453    4385 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0924 12:08:51.035497    4385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:08:51.099254    4385 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0924 12:08:51.105356    4385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0924 12:08:51.109969    4385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:08:51.192130    4385 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0924 12:08:51.231057    4385 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0924 12:08:51.231152    4385 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
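	Note: after restarting cri-docker.socket, minikube waits up to 60s for /var/run/cri-dockerd.sock to appear. A minimal readiness probe for the same path (a sketch, not minikube's internal wait loop):
	    for i in $(seq 1 60); do
	      test -S /var/run/cri-dockerd.sock && { echo "socket ready"; break; }
	      sleep 1
	    done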
	I0924 12:08:51.233121    4385 start.go:563] Will wait 60s for crictl version
	I0924 12:08:51.233174    4385 ssh_runner.go:195] Run: which crictl
	I0924 12:08:51.234493    4385 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 12:08:51.247638    4385 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0924 12:08:51.247715    4385 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0924 12:08:51.267749    4385 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0924 12:08:51.283669    4385 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0924 12:08:51.283757    4385 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0924 12:08:51.285064    4385 kubeadm.go:883] updating cluster {Name:running-upgrade-070000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50285 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0924 12:08:51.285105    4385 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0924 12:08:51.285151    4385 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0924 12:08:51.294940    4385 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0924 12:08:51.294952    4385 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0924 12:08:51.295002    4385 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0924 12:08:51.298116    4385 ssh_runner.go:195] Run: which lz4
	I0924 12:08:51.299435    4385 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 12:08:51.300681    4385 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 12:08:51.300691    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0924 12:08:52.164257    4385 docker.go:649] duration metric: took 864.865584ms to copy over tarball
	I0924 12:08:52.164338    4385 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 12:08:53.253863    4385 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.089516958s)
	I0924 12:08:53.253879    4385 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 12:08:53.270203    4385 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0924 12:08:53.273687    4385 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0924 12:08:53.278739    4385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:08:53.342862    4385 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0924 12:08:54.529984    4385 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.187112416s)
	I0924 12:08:54.530098    4385 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0924 12:08:54.541432    4385 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0924 12:08:54.541441    4385 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0924 12:08:54.541447    4385 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 12:08:54.545082    4385 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0924 12:08:54.547013    4385 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:08:54.548718    4385 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0924 12:08:54.548926    4385 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0924 12:08:54.551075    4385 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:08:54.551289    4385 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0924 12:08:54.552610    4385 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:08:54.553967    4385 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0924 12:08:54.554291    4385 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0924 12:08:54.555092    4385 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0924 12:08:54.555165    4385 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:08:54.555894    4385 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0924 12:08:54.557019    4385 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0924 12:08:54.557017    4385 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0924 12:08:54.557920    4385 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0924 12:08:54.558544    4385 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0924 12:08:54.943709    4385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0924 12:08:54.956950    4385 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0924 12:08:54.956978    4385 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0924 12:08:54.957052    4385 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0924 12:08:54.964869    4385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0924 12:08:54.968003    4385 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0924 12:08:54.968137    4385 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0924 12:08:54.970856    4385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0924 12:08:54.980092    4385 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0924 12:08:54.980092    4385 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0924 12:08:54.980117    4385 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0924 12:08:54.980123    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0924 12:08:54.980172    4385 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0924 12:08:54.984847    4385 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0924 12:08:54.984991    4385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:08:54.989208    4385 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0924 12:08:54.989228    4385 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0924 12:08:54.989280    4385 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0924 12:08:54.991583    4385 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0924 12:08:54.991592    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0924 12:08:54.999582    4385 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0924 12:08:55.003182    4385 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0924 12:08:55.003201    4385 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:08:55.003270    4385 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:08:55.015207    4385 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0924 12:08:55.015344    4385 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0924 12:08:55.041797    4385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0924 12:08:55.043709    4385 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0924 12:08:55.043737    4385 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0924 12:08:55.043743    4385 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0924 12:08:55.043754    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0924 12:08:55.043840    4385 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0924 12:08:55.047082    4385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0924 12:08:55.059190    4385 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0924 12:08:55.059215    4385 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0924 12:08:55.059237    4385 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0924 12:08:55.059254    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0924 12:08:55.059281    4385 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0924 12:08:55.065934    4385 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0924 12:08:55.065958    4385 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0924 12:08:55.066031    4385 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0924 12:08:55.086383    4385 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0924 12:08:55.091321    4385 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0924 12:08:55.101454    4385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0924 12:08:55.152571    4385 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0924 12:08:55.152593    4385 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0924 12:08:55.152658    4385 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0924 12:08:55.155690    4385 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0924 12:08:55.155697    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0924 12:08:55.184340    4385 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0924 12:08:55.249296    4385 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0924 12:08:55.369690    4385 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0924 12:08:55.369704    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0924 12:08:55.408584    4385 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0924 12:08:55.408703    4385 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:08:55.528861    4385 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0924 12:08:55.528882    4385 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0924 12:08:55.528900    4385 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:08:55.528972    4385 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:08:56.566811    4385 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.037825791s)
	I0924 12:08:56.566835    4385 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 12:08:56.567070    4385 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0924 12:08:56.570847    4385 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0924 12:08:56.570874    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0924 12:08:56.621355    4385 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0924 12:08:56.621376    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0924 12:08:56.850788    4385 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0924 12:08:56.850830    4385 cache_images.go:92] duration metric: took 2.309386791s to LoadCachedImages
	W0924 12:08:56.850885    4385 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
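	Note: every cached image above goes through the same cycle: inspect by expected hash, rmi the stale copy, scp the cached tarball into /var/lib/minikube/images, and docker load it; the kube-proxy failure is only a missing cache file on the host. The per-image idiom, generalized (image name and tarball path illustrative):
	    img=registry.k8s.io/pause:3.7
	    tar=/var/lib/minikube/images/pause_3.7
	    # reload only when the expected image is absent from the runtime
	    docker image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1 \
	      || sudo docker load < "$tar"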
	I0924 12:08:56.850891    4385 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0924 12:08:56.850940    4385 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-070000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
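	Note: the kubelet unit above lands as the 10-kubeadm.conf drop-in, and the empty ExecStart= line clears the base unit's command before setting the new one, the same trick used for docker.service earlier. To inspect the merged result on the guest:
	    systemctl cat kubelet --no-pager          # base unit plus the 10-kubeadm.conf drop-in
	    systemctl show kubelet -p ExecStart --no-pager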
	I0924 12:08:56.851018    4385 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0924 12:08:56.864127    4385 cni.go:84] Creating CNI manager for ""
	I0924 12:08:56.864139    4385 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:08:56.864144    4385 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 12:08:56.864152    4385 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-070000 NodeName:running-upgrade-070000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 12:08:56.864222    4385 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-070000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
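	Note: the rendered kubeadm config above is staged at /var/tmp/minikube/kubeadm.yaml.new before the restart path decides how to reconcile the cluster. One way to sanity-check a config like this without applying it (binary path from the log; a sketch using kubeadm's dry-run mode):
	    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run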
	I0924 12:08:56.864285    4385 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0924 12:08:56.867324    4385 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 12:08:56.867356    4385 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 12:08:56.870053    4385 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0924 12:08:56.875129    4385 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 12:08:56.880022    4385 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0924 12:08:56.886017    4385 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0924 12:08:56.887512    4385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:08:56.966709    4385 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 12:08:56.971788    4385 certs.go:68] Setting up /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000 for IP: 10.0.2.15
	I0924 12:08:56.971798    4385 certs.go:194] generating shared ca certs ...
	I0924 12:08:56.971807    4385 certs.go:226] acquiring lock for ca certs: {Name:mk724855f1a91a4bb17b52053043bbe8bd1cc119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:08:56.971956    4385 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.key
	I0924 12:08:56.971990    4385 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.key
	I0924 12:08:56.971999    4385 certs.go:256] generating profile certs ...
	I0924 12:08:56.972064    4385 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/client.key
	I0924 12:08:56.972079    4385 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/apiserver.key.784d826e
	I0924 12:08:56.972091    4385 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/apiserver.crt.784d826e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0924 12:08:57.066060    4385 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/apiserver.crt.784d826e ...
	I0924 12:08:57.066065    4385 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/apiserver.crt.784d826e: {Name:mk758de956c8379686858f086e231ec1f0ac55f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:08:57.066403    4385 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/apiserver.key.784d826e ...
	I0924 12:08:57.066410    4385 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/apiserver.key.784d826e: {Name:mk656aa4735702d4a1138f7ab753199381d17bfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:08:57.066566    4385 certs.go:381] copying /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/apiserver.crt.784d826e -> /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/apiserver.crt
	I0924 12:08:57.066700    4385 certs.go:385] copying /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/apiserver.key.784d826e -> /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/apiserver.key
	I0924 12:08:57.066828    4385 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/proxy-client.key
	I0924 12:08:57.066950    4385 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/1598.pem (1338 bytes)
	W0924 12:08:57.066972    4385 certs.go:480] ignoring /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/1598_empty.pem, impossibly tiny 0 bytes
	I0924 12:08:57.066977    4385 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 12:08:57.066996    4385 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem (1078 bytes)
	I0924 12:08:57.067015    4385 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem (1123 bytes)
	I0924 12:08:57.067034    4385 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/key.pem (1675 bytes)
	I0924 12:08:57.067071    4385 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/files/etc/ssl/certs/15982.pem (1708 bytes)
	I0924 12:08:57.067395    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 12:08:57.074589    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 12:08:57.081953    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 12:08:57.089585    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 12:08:57.097239    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 12:08:57.104419    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 12:08:57.111257    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 12:08:57.117933    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 12:08:57.125491    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/files/etc/ssl/certs/15982.pem --> /usr/share/ca-certificates/15982.pem (1708 bytes)
	I0924 12:08:57.132742    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 12:08:57.140125    4385 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/1598.pem --> /usr/share/ca-certificates/1598.pem (1338 bytes)
	I0924 12:08:57.147187    4385 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 12:08:57.152010    4385 ssh_runner.go:195] Run: openssl version
	I0924 12:08:57.153867    4385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15982.pem && ln -fs /usr/share/ca-certificates/15982.pem /etc/ssl/certs/15982.pem"
	I0924 12:08:57.157340    4385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15982.pem
	I0924 12:08:57.158942    4385 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:35 /usr/share/ca-certificates/15982.pem
	I0924 12:08:57.158970    4385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15982.pem
	I0924 12:08:57.160700    4385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15982.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 12:08:57.163509    4385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 12:08:57.166355    4385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 12:08:57.168106    4385 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:19 /usr/share/ca-certificates/minikubeCA.pem
	I0924 12:08:57.168131    4385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 12:08:57.169862    4385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 12:08:57.172954    4385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1598.pem && ln -fs /usr/share/ca-certificates/1598.pem /etc/ssl/certs/1598.pem"
	I0924 12:08:57.176230    4385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1598.pem
	I0924 12:08:57.177699    4385 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:35 /usr/share/ca-certificates/1598.pem
	I0924 12:08:57.177724    4385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1598.pem
	I0924 12:08:57.179566    4385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1598.pem /etc/ssl/certs/51391683.0"
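	Note: the test/ln commands above implement OpenSSL's hashed-directory convention: each CA in /etc/ssl/certs is also linked at <subject_hash>.0 so verification can locate it by hash. The hash in the link name comes straight from openssl:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 in this run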
	I0924 12:08:57.182211    4385 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 12:08:57.183905    4385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 12:08:57.185709    4385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 12:08:57.187836    4385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 12:08:57.189601    4385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 12:08:57.191551    4385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 12:08:57.193255    4385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
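	Note: each -checkend 86400 call above exits non-zero if that certificate expires within 24 hours, which is what gates reusing the existing certs versus regenerating them. Standalone form:
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "valid for >24h" || echo "expiring; regenerate"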
	I0924 12:08:57.195317    4385 kubeadm.go:392] StartCluster: {Name:running-upgrade-070000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50285 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0924 12:08:57.195386    4385 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0924 12:08:57.205790    4385 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 12:08:57.210021    4385 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 12:08:57.210030    4385 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 12:08:57.210057    4385 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 12:08:57.213421    4385 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 12:08:57.213669    4385 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-070000" does not appear in /Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:08:57.213734    4385 kubeconfig.go:62] /Users/jenkins/minikube-integration/19700-1081/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-070000" cluster setting kubeconfig missing "running-upgrade-070000" context setting]
	I0924 12:08:57.213861    4385 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/kubeconfig: {Name:mk406b8f0f5e016c0aa63af8364801bb91be8bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:08:57.214979    4385 kapi.go:59] client config for running-upgrade-070000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/client.key", CAFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10420a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 12:08:57.215323    4385 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 12:08:57.218205    4385 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-070000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
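
The drift check itself is just "diff -u" between the kubeadm.yaml laid down by the old binary and the one the new binary renders: exit status 0 means identical, 1 means drift (here: the cri-dockerd socket gains a unix:// scheme, and the kubelet moves from the systemd to the cgroupfs cgroup driver), anything else is an error. A minimal sketch of that decision, with invented names:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // detectDrift runs "sudo diff -u old new"; diff's exit code 1 means the
    // files differ, which is the "will reconfigure cluster" signal in the log.
    func detectDrift(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // identical: no drift
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, string(out), nil // differ: reconfigure from the new file
        }
        return false, "", err // exit 2 or worse: missing file, bad invocation
    }

    func main() {
        drift, diff, err := detectDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drift, err)
        fmt.Print(diff)
    }
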
	I0924 12:08:57.218212    4385 kubeadm.go:1160] stopping kube-system containers ...
	I0924 12:08:57.218260    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0924 12:08:57.229602    4385 docker.go:483] Stopping containers: [24841f18fe00 99f9b01d6324 ccd882e6c66c 9c4f3996e841 39ef84c00e75 1ffeebca7f19 f87cbe4bd802 4aa76c361b77 9130f5815031 32c12eece8b3 ac32f3ea2537 166f51d20ce8 080c8510e65b]
	I0924 12:08:57.229682    4385 ssh_runner.go:195] Run: docker stop 24841f18fe00 99f9b01d6324 ccd882e6c66c 9c4f3996e841 39ef84c00e75 1ffeebca7f19 f87cbe4bd802 4aa76c361b77 9130f5815031 32c12eece8b3 ac32f3ea2537 166f51d20ce8 080c8510e65b
	I0924 12:08:57.240589    4385 ssh_runner.go:195] Run: sudo systemctl stop kubelet
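
Control-plane containers are discovered through Docker's k8s_<container>_<pod>_<namespace>_ naming convention, stopped in a single "docker stop", and kubelet is stopped last so it cannot immediately recreate the static pods. A sketch of the same sequence, assuming local docker access (helper name invented):

    package main

    import (
        "os/exec"
        "strings"
    )

    // stopKubeSystem stops every container whose name marks it as part of a
    // kube-system pod, then stops kubelet so the static pods stay down.
    func stopKubeSystem() error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) > 0 {
            if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
                return err
            }
        }
        return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
    }

    func main() {
        if err := stopKubeSystem(); err != nil {
            panic(err)
        }
    }
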
	I0924 12:08:57.342071    4385 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 12:08:57.346380    4385 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 24 19:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Sep 24 19:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 24 19:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 24 19:08 /etc/kubernetes/scheduler.conf
	
	I0924 12:08:57.346426    4385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf
	I0924 12:08:57.349861    4385 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0924 12:08:57.349895    4385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 12:08:57.353259    4385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf
	I0924 12:08:57.356544    4385 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0924 12:08:57.356584    4385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 12:08:57.360254    4385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf
	I0924 12:08:57.363242    4385 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0924 12:08:57.363266    4385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 12:08:57.366101    4385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf
	I0924 12:08:57.369283    4385 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0924 12:08:57.369311    4385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
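
Each of the four kubeconfig-style files under /etc/kubernetes is grepped for the expected control-plane endpoint; since grep exits non-zero on no match, a miss is read as "stale endpoint" and the file is deleted so the kubeconfig phase below can regenerate it against port 50285. A sketch of that loop, names invented:

    package main

    import "os/exec"

    // pruneStaleConfs deletes any /etc/kubernetes/*.conf that does not mention
    // the expected endpoint, mirroring the grep-then-rm pairs in the log.
    func pruneStaleConfs(endpoint string) error {
        for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + name
            // grep exits 1 when the endpoint is absent; treat that as stale.
            if exec.Command("sudo", "grep", endpoint, path).Run() != nil {
                if err := exec.Command("sudo", "rm", "-f", path).Run(); err != nil {
                    return err
                }
            }
        }
        return nil
    }

    func main() {
        if err := pruneStaleConfs("https://control-plane.minikube.internal:50285"); err != nil {
            panic(err)
        }
    }
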
	I0924 12:08:57.372428    4385 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 12:08:57.375401    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 12:08:57.397476    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 12:08:58.387470    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 12:08:58.589245    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 12:08:58.610581    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
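
The restart then replays five kubeadm init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), each run through bash with PATH prefixed so the cached v1.24.1 binaries win over anything else on the node. A compact sketch of that loop under the same assumptions:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // replayInitPhases re-runs the kubeadm init phases in the order the log
    // shows, against the freshly copied kubeadm.yaml.
    func replayInitPhases(version, cfg string) error {
        for _, phase := range []string{
            "certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
        } {
            script := fmt.Sprintf(
                `sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
                version, phase, cfg)
            if err := exec.Command("/bin/bash", "-c", script).Run(); err != nil {
                return fmt.Errorf("kubeadm init phase %s: %w", phase, err)
            }
        }
        return nil
    }

    func main() {
        if err := replayInitPhases("v1.24.1", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
            panic(err)
        }
    }
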
	I0924 12:08:58.634773    4385 api_server.go:52] waiting for apiserver process to appear ...
	I0924 12:08:58.634861    4385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 12:08:59.137300    4385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 12:08:59.636951    4385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 12:09:00.137007    4385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 12:09:00.143490    4385 api_server.go:72] duration metric: took 1.508732s to wait for apiserver process to appear ...
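
The process wait above is a plain pgrep poll at roughly 500 ms intervals, visible in the timestamps (58.634, 59.137, 59.636, 00.137). A sketch with an invented timeout parameter:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep (-x whole-line regex match, -n newest,
    // -f full command line) every 500ms until a kube-apiserver process appears.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServer(2 * time.Minute))
    }
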
	I0924 12:09:00.143501    4385 api_server.go:88] waiting for apiserver healthz status ...
	I0924 12:09:00.143514    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:09:05.145625    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:09:05.145675    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:09:10.146085    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:09:10.146139    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:09:15.146695    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:09:15.146721    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:09:20.147340    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:09:20.147443    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:09:25.148810    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:09:25.148910    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:09:30.150646    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:09:30.150739    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:09:35.152807    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:09:35.152902    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:09:40.155537    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:09:40.155640    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:09:45.158361    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:09:45.158447    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:09:50.161116    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:09:50.161213    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:09:55.163902    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:09:55.163949    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:10:00.166429    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
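
Every healthz probe above fails the same way: the client's ~5 s budget (the spacing between attempts) expires while awaiting response headers, consistent with the guest port accepting connections but no API server answering behind it. A hedged sketch of one probe; the real check trusts minikube's CA, so the InsecureSkipVerify here is purely illustrative:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeHealthz issues one GET against the apiserver's healthz endpoint with
    // the same 5s budget seen between attempts in the log.
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Illustrative only; the real client pins minikube's CA instead.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "Client.Timeout exceeded while awaiting headers"
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        fmt.Println(probeHealthz("https://10.0.2.15:8443/healthz"))
    }
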
	I0924 12:10:00.167033    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:10:00.209501    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:10:00.209667    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:10:00.231862    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:10:00.231984    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:10:00.250299    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:10:00.250398    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:10:00.262375    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:10:00.262478    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:10:00.273429    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:10:00.273514    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:10:00.284330    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:10:00.284416    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:10:00.294387    4385 logs.go:276] 0 containers: []
	W0924 12:10:00.294400    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:10:00.294464    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:10:00.304474    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:10:00.304489    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:10:00.304494    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:10:00.318797    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:10:00.318810    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:10:00.331039    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:10:00.331052    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:10:00.342476    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:10:00.342489    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:10:00.367052    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:10:00.367061    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:10:00.378163    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:10:00.378175    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:10:00.394325    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:10:00.394335    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:10:00.405776    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:10:00.405785    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:10:00.420875    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:10:00.420885    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:10:00.438068    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:10:00.438077    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:10:00.449458    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:10:00.449471    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:10:00.486763    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:10:00.486771    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:10:00.491298    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:10:00.491306    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:10:00.505314    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:10:00.505327    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:10:00.522883    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:10:00.522894    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:10:00.593335    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:10:00.593348    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:10:00.621026    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:10:00.621038    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
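
When a probe window expires, the runner falls back to the diagnostics fan-out above: per component it lists current and exited containers (the components restarted by the init phases each show two IDs), tails the last 400 lines of each, and adds journalctl for kubelet and docker, dmesg, and "kubectl describe nodes". The same cycle repeats before every retry, which is why the block recurs below. A sketch of the per-component step, helper names invented:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containersFor lists all containers (running or exited) whose name matches
    // the k8s_<component> prefix, the way the docker ps filters above do.
    func containersFor(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tail400 grabs the last 400 log lines of one container.
    func tail400(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, _ := containersFor(c)
            for _, id := range ids {
                logs, _ := tail400(id)
                fmt.Printf("== %s %s ==\n%s", c, id, logs)
            }
        }
    }
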
	I0924 12:10:03.135049    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:10:08.138016    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:10:08.138559    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:10:08.178960    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:10:08.179127    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:10:08.201288    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:10:08.201428    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:10:08.217641    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:10:08.217734    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:10:08.229577    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:10:08.229662    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:10:08.241155    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:10:08.241229    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:10:08.255078    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:10:08.255155    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:10:08.265902    4385 logs.go:276] 0 containers: []
	W0924 12:10:08.265915    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:10:08.265988    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:10:08.276784    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:10:08.276802    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:10:08.276808    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:10:08.281021    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:10:08.281030    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:10:08.296438    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:10:08.296451    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:10:08.322491    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:10:08.322501    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:10:08.360256    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:10:08.360262    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:10:08.398589    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:10:08.398602    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:10:08.411688    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:10:08.411699    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:10:08.423606    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:10:08.423621    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:10:08.435430    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:10:08.435443    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:10:08.450732    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:10:08.450741    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:10:08.467833    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:10:08.467842    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:10:08.478973    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:10:08.478983    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:10:08.492837    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:10:08.492847    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:10:08.517233    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:10:08.517244    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:10:08.531291    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:10:08.531301    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:10:08.542580    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:10:08.542593    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:10:08.553723    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:10:08.553735    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:10:11.067484    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:10:16.069925    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:10:16.070410    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:10:16.105243    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:10:16.105381    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:10:16.124869    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:10:16.125011    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:10:16.141151    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:10:16.141247    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:10:16.152932    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:10:16.153001    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:10:16.163649    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:10:16.163738    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:10:16.173812    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:10:16.173906    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:10:16.184659    4385 logs.go:276] 0 containers: []
	W0924 12:10:16.184670    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:10:16.184741    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:10:16.195615    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:10:16.195631    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:10:16.195636    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:10:16.210365    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:10:16.210379    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:10:16.227718    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:10:16.227728    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:10:16.239117    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:10:16.239127    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:10:16.250340    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:10:16.250354    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:10:16.286505    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:10:16.286520    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:10:16.301003    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:10:16.301016    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:10:16.311944    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:10:16.311955    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:10:16.323543    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:10:16.323553    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:10:16.327916    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:10:16.327923    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:10:16.341370    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:10:16.341378    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:10:16.356123    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:10:16.356132    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:10:16.367540    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:10:16.367550    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:10:16.394100    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:10:16.394110    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:10:16.432008    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:10:16.432022    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:10:16.457985    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:10:16.458000    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:10:16.470404    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:10:16.470414    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:10:18.984706    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:10:23.987358    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:10:23.987649    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:10:24.010642    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:10:24.010789    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:10:24.026762    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:10:24.026850    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:10:24.040173    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:10:24.040247    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:10:24.051106    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:10:24.051194    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:10:24.061903    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:10:24.061985    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:10:24.072739    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:10:24.072825    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:10:24.082804    4385 logs.go:276] 0 containers: []
	W0924 12:10:24.082818    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:10:24.082892    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:10:24.093223    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:10:24.093242    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:10:24.093248    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:10:24.107492    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:10:24.107501    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:10:24.118931    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:10:24.118942    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:10:24.130629    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:10:24.130638    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:10:24.143325    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:10:24.143341    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:10:24.154777    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:10:24.154791    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:10:24.159441    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:10:24.159450    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:10:24.195403    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:10:24.195415    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:10:24.220709    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:10:24.220718    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:10:24.234448    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:10:24.234459    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:10:24.247643    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:10:24.247654    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:10:24.260693    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:10:24.260705    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:10:24.298153    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:10:24.298163    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:10:24.312195    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:10:24.312209    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:10:24.326667    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:10:24.326678    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:10:24.338356    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:10:24.338369    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:10:24.361555    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:10:24.361565    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:10:26.889544    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:10:31.892044    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:10:31.892642    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:10:31.932335    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:10:31.932504    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:10:31.953748    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:10:31.953891    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:10:31.969143    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:10:31.969241    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:10:31.982000    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:10:31.982089    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:10:31.992670    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:10:31.992748    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:10:32.012331    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:10:32.012421    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:10:32.027400    4385 logs.go:276] 0 containers: []
	W0924 12:10:32.027412    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:10:32.027488    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:10:32.038136    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:10:32.038152    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:10:32.038157    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:10:32.051163    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:10:32.051174    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:10:32.066242    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:10:32.066255    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:10:32.078708    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:10:32.078721    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:10:32.103718    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:10:32.103728    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:10:32.117846    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:10:32.117858    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:10:32.132389    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:10:32.132402    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:10:32.168313    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:10:32.168323    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:10:32.172440    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:10:32.172450    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:10:32.183628    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:10:32.183641    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:10:32.195447    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:10:32.195460    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:10:32.213514    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:10:32.213528    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:10:32.225377    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:10:32.225386    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:10:32.237074    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:10:32.237090    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:10:32.263230    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:10:32.263240    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:10:32.296272    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:10:32.296284    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:10:32.310116    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:10:32.310127    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:10:34.823430    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:10:39.826023    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:10:39.826170    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:10:39.839358    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:10:39.839442    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:10:39.850804    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:10:39.850888    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:10:39.861422    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:10:39.861491    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:10:39.872136    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:10:39.872215    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:10:39.882145    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:10:39.882214    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:10:39.896420    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:10:39.896502    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:10:39.906170    4385 logs.go:276] 0 containers: []
	W0924 12:10:39.906185    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:10:39.906248    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:10:39.916901    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:10:39.916920    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:10:39.916926    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:10:39.928415    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:10:39.928426    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:10:39.959730    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:10:39.959742    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:10:39.976115    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:10:39.976125    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:10:39.991059    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:10:39.991069    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:10:40.009734    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:10:40.009744    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:10:40.026512    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:10:40.026522    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:10:40.030757    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:10:40.030765    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:10:40.044658    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:10:40.044668    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:10:40.070170    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:10:40.070177    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:10:40.081652    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:10:40.081661    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:10:40.098852    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:10:40.098861    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:10:40.109947    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:10:40.109958    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:10:40.145441    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:10:40.145450    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:10:40.180715    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:10:40.180726    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:10:40.195524    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:10:40.195534    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:10:40.206499    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:10:40.206511    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:10:42.719638    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:10:47.721519    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:10:47.721950    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:10:47.755968    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:10:47.756124    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:10:47.776168    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:10:47.776301    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:10:47.795488    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:10:47.795592    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:10:47.807260    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:10:47.807342    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:10:47.817887    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:10:47.817972    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:10:47.828468    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:10:47.828552    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:10:47.841343    4385 logs.go:276] 0 containers: []
	W0924 12:10:47.841358    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:10:47.841426    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:10:47.855976    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:10:47.856000    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:10:47.856005    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:10:47.893587    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:10:47.893595    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:10:47.927816    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:10:47.927827    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:10:47.950980    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:10:47.950993    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:10:47.968397    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:10:47.968410    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:10:47.980213    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:10:47.980224    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:10:47.992736    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:10:47.992747    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:10:48.007342    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:10:48.007355    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:10:48.022912    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:10:48.022922    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:10:48.027629    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:10:48.027636    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:10:48.051836    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:10:48.051848    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:10:48.063239    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:10:48.063249    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:10:48.074728    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:10:48.074742    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:10:48.086497    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:10:48.086512    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:10:48.106565    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:10:48.106579    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:10:48.120668    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:10:48.120679    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:10:48.132894    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:10:48.132905    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:10:50.661317    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:10:55.664117    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:10:55.664600    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:10:55.722139    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:10:55.722279    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:10:55.743772    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:10:55.743871    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:10:55.756320    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:10:55.756403    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:10:55.767804    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:10:55.767887    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:10:55.779167    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:10:55.779244    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:10:55.790060    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:10:55.790147    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:10:55.804399    4385 logs.go:276] 0 containers: []
	W0924 12:10:55.804413    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:10:55.804494    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:10:55.815842    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:10:55.815859    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:10:55.815865    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:10:55.819981    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:10:55.819990    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:10:55.834193    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:10:55.834204    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:10:55.848774    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:10:55.848786    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:10:55.860698    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:10:55.860712    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:10:55.886518    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:10:55.886529    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:10:55.897861    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:10:55.897870    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:10:55.935567    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:10:55.935575    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:10:55.949727    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:10:55.949737    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:10:55.964702    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:10:55.964713    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:10:55.998567    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:10:55.998582    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:10:56.023223    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:10:56.023238    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:10:56.039004    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:10:56.039015    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:10:56.050838    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:10:56.050849    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:10:56.065670    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:10:56.065682    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:10:56.083723    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:10:56.083740    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:10:56.095678    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:10:56.095690    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:10:58.611900    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:11:03.612876    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
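The pair of api_server.go:253/269 lines above repeats throughout this failure: a healthz probe against the guest apiserver that gives up after roughly five seconds with "Client.Timeout exceeded". A minimal sketch of that probe follows, assuming a plain net/http client with a 5 s timeout and skipped TLS verification (the VM serves a self-signed certificate); this is an illustration of what the log records, not minikube's actual implementation.

// Hedged sketch of the healthz poll logged at api_server.go:253/269:
// an HTTPS GET with a ~5 s client timeout against the guest apiserver.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the 5 s gap before "stopped:" above
		Transport: &http.Transport{
			// assumption: the test VM's apiserver cert is self-signed
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err) // with the apiserver unreachable this mirrors the log's error
	}
}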
	I0924 12:11:03.613202    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:11:03.640303    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:11:03.640449    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:11:03.659362    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:11:03.659467    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:11:03.673392    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:11:03.673479    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:11:03.685469    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:11:03.685553    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:11:03.696307    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:11:03.696386    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:11:03.707190    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:11:03.707270    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:11:03.719657    4385 logs.go:276] 0 containers: []
	W0924 12:11:03.719669    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:11:03.719732    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:11:03.731020    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
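After each failed probe, the run enumerates containers per control-plane component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, as the block above shows. The following sketch reproduces that discovery pattern under two stated assumptions: a docker CLI on the local PATH (in the real run the command goes through ssh_runner into the VM), and the component names seen in this log.

// Hedged sketch of the container-ID discovery step logged above:
// list containers whose name matches k8s_<component>, print only IDs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// one short ID per line, e.g. 347277fe6dd8; Fields drops the trailing newline
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:276 lines
	}
}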
	I0924 12:11:03.731041    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:11:03.731047    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:11:03.749215    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:11:03.749225    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:11:03.762773    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:11:03.762784    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:11:03.775718    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:11:03.775730    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:11:03.801635    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:11:03.801655    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:11:03.815067    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:11:03.815081    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:11:03.827263    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:11:03.827276    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:11:03.853370    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:11:03.853385    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:11:03.869192    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:11:03.869208    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:11:03.884643    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:11:03.884664    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:11:03.896346    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:11:03.896357    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:11:03.900685    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:11:03.900691    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:11:03.936278    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:11:03.936295    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:11:03.951228    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:11:03.951243    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:11:03.963252    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:11:03.963262    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:11:03.975772    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:11:03.975784    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:11:03.987545    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:11:03.987557    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
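The "Gathering logs for ..." lines then tail each discovered container plus the kubelet and docker journald units. A compact sketch of that step is below; the command strings are copied from the log, but running them locally and without the sudo wrapper the VM uses is this sketch's simplification.

// Hedged sketch of the gather step: tail 400 lines per container and
// per systemd unit, exactly the commands recorded in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		fmt.Printf("%s %v failed: %v\n", name, args, err)
	}
	fmt.Print(string(out))
}

func main() {
	// container IDs taken from the cycle above; substitute your own
	for _, id := range []string{"347277fe6dd8", "9c4f3996e841"} {
		run("docker", "logs", "--tail", "400", id)
	}
	// in the log these run under sudo inside the guest
	run("journalctl", "-u", "kubelet", "-n", "400")
	run("journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400")
}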
	I0924 12:11:06.528138    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:11:11.530773    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:11:11.530952    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:11:11.542702    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:11:11.542791    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:11:11.553546    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:11:11.553638    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:11:11.564158    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:11:11.564240    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:11:11.574826    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:11:11.574911    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:11:11.585624    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:11:11.585706    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:11:11.596946    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:11:11.597023    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:11:11.607345    4385 logs.go:276] 0 containers: []
	W0924 12:11:11.607357    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:11:11.607427    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:11:11.618533    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:11:11.618552    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:11:11.618560    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:11:11.633498    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:11:11.633515    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:11:11.658064    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:11:11.658078    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:11:11.696730    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:11:11.696738    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:11:11.700946    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:11:11.700955    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:11:11.738669    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:11:11.738680    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:11:11.754983    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:11:11.754994    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:11:11.769910    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:11:11.769926    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:11:11.781415    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:11:11.781427    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:11:11.797557    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:11:11.797570    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:11:11.811388    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:11:11.811398    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:11:11.826168    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:11:11.826178    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:11:11.837773    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:11:11.837786    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:11:11.863600    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:11:11.863608    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:11:11.875477    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:11:11.875488    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:11:11.888917    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:11:11.888927    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:11:11.907156    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:11:11.907165    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
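Each cycle also runs `kubectl describe nodes` using the version-pinned binary under /var/lib/minikube/binaries/v1.24.1 and the guest's kubeconfig. A sketch of that invocation, assuming those paths exist as they do inside the VM in this log:

// Hedged sketch of the "describe nodes" gather: a pinned kubectl with an
// explicit --kubeconfig. Paths are copied from the log; their presence on
// any other machine is an assumption of this sketch.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.24.1/kubectl"
	out, err := exec.Command(kubectl, "describe", "nodes",
		"--kubeconfig", "/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(string(out))
}

From here the report shows the same probe, enumeration, and gather cycle repeating verbatim until the outer wait deadline; the log lines below are kept as recorded.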
	I0924 12:11:14.421683    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:11:19.423857    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:11:19.423974    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:11:19.434796    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:11:19.434866    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:11:19.445224    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:11:19.445307    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:11:19.455482    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:11:19.455556    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:11:19.465911    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:11:19.465980    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:11:19.476018    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:11:19.476101    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:11:19.486213    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:11:19.486302    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:11:19.496040    4385 logs.go:276] 0 containers: []
	W0924 12:11:19.496052    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:11:19.496108    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:11:19.509946    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:11:19.509968    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:11:19.509974    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:11:19.524784    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:11:19.524795    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:11:19.536359    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:11:19.536370    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:11:19.571871    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:11:19.571879    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:11:19.605250    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:11:19.605265    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:11:19.619835    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:11:19.619846    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:11:19.631422    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:11:19.631433    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:11:19.642839    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:11:19.642850    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:11:19.656995    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:11:19.657009    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:11:19.670585    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:11:19.670595    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:11:19.681843    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:11:19.681855    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:11:19.704558    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:11:19.704568    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:11:19.729561    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:11:19.729572    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:11:19.733798    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:11:19.733804    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:11:19.758948    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:11:19.758958    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:11:19.770649    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:11:19.770660    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:11:19.784737    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:11:19.784748    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:11:22.298315    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:11:27.300927    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:11:27.301063    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:11:27.317255    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:11:27.317349    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:11:27.330882    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:11:27.330969    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:11:27.350685    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:11:27.350779    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:11:27.364156    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:11:27.364253    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:11:27.376150    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:11:27.376250    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:11:27.390332    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:11:27.390521    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:11:27.402808    4385 logs.go:276] 0 containers: []
	W0924 12:11:27.402823    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:11:27.402889    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:11:27.415033    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:11:27.415053    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:11:27.415059    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:11:27.454923    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:11:27.454946    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:11:27.470477    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:11:27.470491    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:11:27.483971    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:11:27.483985    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:11:27.500578    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:11:27.500594    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:11:27.519127    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:11:27.519140    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:11:27.538862    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:11:27.538880    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:11:27.552193    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:11:27.552206    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:11:27.579273    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:11:27.579303    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:11:27.584833    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:11:27.584846    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:11:27.605486    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:11:27.605498    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:11:27.618770    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:11:27.618782    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:11:27.659294    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:11:27.659308    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:11:27.692692    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:11:27.692718    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:11:27.708751    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:11:27.708772    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:11:27.725810    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:11:27.725825    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:11:27.739053    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:11:27.739065    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:11:30.254977    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:11:35.257684    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:11:35.257847    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:11:35.271385    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:11:35.271477    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:11:35.282830    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:11:35.282910    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:11:35.296971    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:11:35.297059    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:11:35.309079    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:11:35.309162    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:11:35.319601    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:11:35.319677    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:11:35.330764    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:11:35.330843    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:11:35.340674    4385 logs.go:276] 0 containers: []
	W0924 12:11:35.340684    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:11:35.340749    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:11:35.350933    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:11:35.350952    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:11:35.350959    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:11:35.364990    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:11:35.364999    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:11:35.383590    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:11:35.383605    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:11:35.408526    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:11:35.408534    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:11:35.420178    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:11:35.420189    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:11:35.457373    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:11:35.457380    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:11:35.491180    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:11:35.491191    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:11:35.505102    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:11:35.505117    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:11:35.516140    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:11:35.516151    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:11:35.528246    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:11:35.528261    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:11:35.539968    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:11:35.539980    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:11:35.545359    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:11:35.545367    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:11:35.559210    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:11:35.559222    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:11:35.570968    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:11:35.570979    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:11:35.604908    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:11:35.604919    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:11:35.619762    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:11:35.619772    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:11:35.631565    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:11:35.631577    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:11:38.146032    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:11:43.146535    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:11:43.146657    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:11:43.158429    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:11:43.158519    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:11:43.174138    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:11:43.174229    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:11:43.189773    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:11:43.189855    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:11:43.205730    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:11:43.205809    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:11:43.216806    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:11:43.216892    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:11:43.233176    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:11:43.233259    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:11:43.245526    4385 logs.go:276] 0 containers: []
	W0924 12:11:43.245542    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:11:43.245616    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:11:43.262858    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:11:43.262878    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:11:43.262884    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:11:43.282120    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:11:43.282131    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:11:43.307448    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:11:43.307465    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:11:43.347024    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:11:43.347047    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:11:43.389513    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:11:43.389528    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:11:43.405968    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:11:43.405987    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:11:43.418296    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:11:43.418310    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:11:43.431211    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:11:43.431226    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:11:43.450280    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:11:43.450299    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:11:43.463183    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:11:43.463196    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:11:43.476566    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:11:43.476578    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:11:43.480796    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:11:43.480805    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:11:43.511669    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:11:43.511680    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:11:43.528483    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:11:43.528496    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:11:43.542057    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:11:43.542070    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:11:43.555552    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:11:43.555566    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:11:43.573312    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:11:43.573325    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:11:46.092351    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:11:51.094519    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:11:51.095078    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:11:51.139401    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:11:51.139560    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:11:51.157237    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:11:51.157342    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:11:51.171453    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:11:51.171548    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:11:51.183333    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:11:51.183407    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:11:51.195101    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:11:51.195192    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:11:51.206453    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:11:51.206531    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:11:51.217133    4385 logs.go:276] 0 containers: []
	W0924 12:11:51.217145    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:11:51.217228    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:11:51.228118    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:11:51.228135    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:11:51.228143    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:11:51.267825    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:11:51.267838    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:11:51.281990    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:11:51.282000    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:11:51.294110    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:11:51.294121    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:11:51.332470    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:11:51.332480    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:11:51.345968    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:11:51.345984    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:11:51.361002    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:11:51.361014    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:11:51.378536    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:11:51.378552    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:11:51.392431    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:11:51.392441    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:11:51.417154    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:11:51.417164    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:11:51.439107    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:11:51.439118    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:11:51.450955    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:11:51.450966    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:11:51.465299    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:11:51.465309    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:11:51.488670    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:11:51.488677    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:11:51.492603    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:11:51.492608    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:11:51.506888    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:11:51.506898    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:11:51.519010    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:11:51.519025    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:11:54.031005    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:11:59.031890    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:11:59.032192    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:11:59.054826    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:11:59.054970    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:11:59.077780    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:11:59.077870    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:11:59.089404    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:11:59.089483    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:11:59.100228    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:11:59.100300    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:11:59.111817    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:11:59.111900    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:11:59.122263    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:11:59.122343    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:11:59.133026    4385 logs.go:276] 0 containers: []
	W0924 12:11:59.133038    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:11:59.133112    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:11:59.143728    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:11:59.143750    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:11:59.143756    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:11:59.148399    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:11:59.148407    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:11:59.165922    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:11:59.165932    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:11:59.201270    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:11:59.201282    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:11:59.226347    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:11:59.226357    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:11:59.240532    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:11:59.240549    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:11:59.259217    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:11:59.259227    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:11:59.274188    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:11:59.274199    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:11:59.289893    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:11:59.289903    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:11:59.312475    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:11:59.312482    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:11:59.325803    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:11:59.325818    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:11:59.338272    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:11:59.338285    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:11:59.374368    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:11:59.374379    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:11:59.388788    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:11:59.388814    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:11:59.406015    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:11:59.406028    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:11:59.417357    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:11:59.417370    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:11:59.431257    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:11:59.431267    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:01.944286    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:06.945692    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:06.946723    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:12:06.959203    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:12:06.959302    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:12:06.971288    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:12:06.971388    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:12:06.983594    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:12:06.983690    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:12:06.995904    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:12:06.995995    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:12:07.008386    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:12:07.008479    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:12:07.020975    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:12:07.021062    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:12:07.034801    4385 logs.go:276] 0 containers: []
	W0924 12:12:07.034817    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:12:07.034896    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:12:07.046807    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:12:07.046832    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:12:07.046838    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:12:07.071318    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:12:07.071342    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:12:07.093936    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:12:07.093949    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:12:07.108575    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:12:07.108588    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:12:07.120201    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:12:07.120214    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:12:07.133310    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:12:07.133321    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:12:07.148655    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:12:07.148665    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:12:07.160395    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:12:07.160408    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:12:07.184077    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:12:07.184098    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:12:07.223004    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:12:07.223028    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:12:07.263886    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:12:07.263907    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:12:07.279881    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:12:07.279903    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:12:07.300496    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:12:07.300515    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:12:07.306114    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:12:07.306127    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:12:07.320383    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:12:07.320400    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:12:07.335183    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:12:07.335195    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:07.349242    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:12:07.349256    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:12:09.882250    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:14.884035    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:14.884229    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:12:14.898626    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:12:14.898718    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:12:14.910117    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:12:14.910200    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:12:14.920723    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:12:14.920807    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:12:14.931309    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:12:14.931385    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:12:14.941668    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:12:14.941752    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:12:14.951836    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:12:14.951906    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:12:14.961827    4385 logs.go:276] 0 containers: []
	W0924 12:12:14.961838    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:12:14.961913    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:12:14.977828    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:12:14.977848    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:12:14.977854    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:12:14.982092    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:12:14.982101    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:12:14.993849    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:12:14.993860    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:12:15.005165    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:12:15.005179    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:12:15.016731    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:12:15.016742    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:12:15.040225    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:12:15.040236    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:12:15.077740    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:12:15.077755    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:12:15.103217    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:12:15.103227    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:12:15.124418    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:12:15.124429    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:15.139426    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:12:15.139436    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:12:15.151246    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:12:15.151255    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:12:15.165127    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:12:15.165139    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:12:15.181079    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:12:15.181092    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:12:15.196616    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:12:15.196627    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:12:15.208342    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:12:15.208356    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:12:15.226687    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:12:15.226702    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:12:15.264437    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:12:15.264444    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:12:17.780036    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:22.781857    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:22.781980    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:12:22.793599    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:12:22.793710    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:12:22.804766    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:12:22.804870    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:12:22.821970    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:12:22.822047    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:12:22.833572    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:12:22.833655    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:12:22.844209    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:12:22.844286    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:12:22.855595    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:12:22.855679    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:12:22.867180    4385 logs.go:276] 0 containers: []
	W0924 12:12:22.867194    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:12:22.867263    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:12:22.878320    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:12:22.878339    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:12:22.878345    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:12:22.894473    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:12:22.894485    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:22.906510    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:12:22.906522    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:12:22.936877    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:12:22.936894    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:12:22.956217    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:12:22.956235    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:12:22.978794    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:12:22.978808    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:12:23.007886    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:12:23.007906    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:12:23.021110    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:12:23.021126    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:12:23.028531    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:12:23.028548    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:12:23.042955    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:12:23.042968    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:12:23.054977    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:12:23.054993    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:12:23.073490    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:12:23.073503    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:12:23.089188    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:12:23.089199    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:12:23.127280    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:12:23.127292    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:12:23.166887    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:12:23.166899    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:12:23.182121    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:12:23.182134    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:12:23.200475    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:12:23.200488    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:12:25.716841    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:30.718476    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:30.718606    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:12:30.731855    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:12:30.731945    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:12:30.742940    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:12:30.743028    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:12:30.754083    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:12:30.754167    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:12:30.765044    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:12:30.765127    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:12:30.775477    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:12:30.775560    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:12:30.786921    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:12:30.787008    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:12:30.797683    4385 logs.go:276] 0 containers: []
	W0924 12:12:30.797695    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:12:30.797767    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:12:30.808620    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:12:30.808638    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:12:30.808644    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:12:30.844587    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:12:30.844597    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:12:30.848751    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:12:30.848761    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:12:30.884416    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:12:30.884427    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:12:30.898774    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:12:30.898783    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:12:30.912559    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:12:30.912575    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:12:30.936252    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:12:30.936264    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:12:30.953875    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:12:30.953891    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:12:30.965678    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:12:30.965689    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:12:30.987887    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:12:30.987896    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:12:30.999200    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:12:30.999214    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:12:31.024201    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:12:31.024214    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:12:31.035898    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:12:31.035909    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:12:31.054045    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:12:31.054055    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:12:31.069816    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:12:31.069827    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:12:31.081212    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:12:31.081223    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:31.095174    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:12:31.095185    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:12:33.608910    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:38.609053    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:38.609275    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:12:38.623106    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:12:38.623208    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:12:38.634354    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:12:38.634451    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:12:38.647999    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:12:38.648077    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:12:38.658868    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:12:38.658965    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:12:38.669953    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:12:38.670030    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:12:38.682694    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:12:38.682770    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:12:38.692592    4385 logs.go:276] 0 containers: []
	W0924 12:12:38.692603    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:12:38.692668    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:12:38.703256    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:12:38.703276    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:12:38.703280    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:12:38.740457    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:12:38.740467    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:12:38.755245    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:12:38.755259    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:12:38.770700    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:12:38.770711    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:12:38.783527    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:12:38.783540    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:12:38.795245    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:12:38.795257    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:12:38.807216    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:12:38.807228    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:12:38.818790    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:12:38.818800    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:12:38.830643    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:12:38.830659    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:12:38.835529    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:12:38.835535    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:12:38.869502    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:12:38.869513    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:12:38.884150    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:12:38.884180    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:12:38.909334    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:12:38.909345    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:12:38.923598    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:12:38.923610    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:12:38.935488    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:12:38.935499    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:12:38.961020    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:12:38.961033    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:38.974574    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:12:38.974588    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:12:41.500025    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:46.502232    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:46.502503    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:12:46.522776    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:12:46.522897    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:12:46.536861    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:12:46.536962    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:12:46.549295    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:12:46.549372    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:12:46.559717    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:12:46.559809    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:12:46.570386    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:12:46.570470    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:12:46.580783    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:12:46.580855    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:12:46.594198    4385 logs.go:276] 0 containers: []
	W0924 12:12:46.594214    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:12:46.594290    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:12:46.604874    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:12:46.604893    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:12:46.604899    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:12:46.642004    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:12:46.642012    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:12:46.646346    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:12:46.646354    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:12:46.657721    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:12:46.657732    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:12:46.669575    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:12:46.669587    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:12:46.681020    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:12:46.681030    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:12:46.702914    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:12:46.702923    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:12:46.728275    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:12:46.728286    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:12:46.751014    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:12:46.751029    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:12:46.762449    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:12:46.762459    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:12:46.779812    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:12:46.779823    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:12:46.817984    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:12:46.818000    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:12:46.832147    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:12:46.832164    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:46.843405    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:12:46.843417    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:12:46.858061    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:12:46.858072    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:12:46.870428    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:12:46.870437    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:12:46.884836    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:12:46.884846    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:12:49.397335    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:54.399655    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:54.399987    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:12:54.426534    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:12:54.426678    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:12:54.443604    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:12:54.443718    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:12:54.458612    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:12:54.458704    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:12:54.470123    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:12:54.470207    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:12:54.481424    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:12:54.481506    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:12:54.492077    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:12:54.492154    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:12:54.502676    4385 logs.go:276] 0 containers: []
	W0924 12:12:54.502692    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:12:54.502754    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:12:54.513271    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:12:54.513291    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:12:54.513296    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:12:54.524864    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:12:54.524876    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:12:54.537391    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:12:54.537401    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:12:54.555651    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:12:54.555661    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:12:54.567881    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:12:54.567893    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:12:54.579626    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:12:54.579637    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:12:54.603032    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:12:54.603041    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:12:54.641263    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:12:54.641274    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:12:54.645842    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:12:54.645851    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:12:54.659830    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:12:54.659841    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:54.671349    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:12:54.671360    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:12:54.684948    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:12:54.684959    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:12:54.712580    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:12:54.712591    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:12:54.727346    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:12:54.727356    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:12:54.739284    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:12:54.739293    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:12:54.778218    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:12:54.778229    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:12:54.792905    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:12:54.792916    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:12:57.305882    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:02.308184    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:02.308254    4385 kubeadm.go:597] duration metric: took 4m5.106411458s to restartPrimaryControlPlane
	W0924 12:13:02.308302    4385 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 12:13:02.308324    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0924 12:13:03.310945    4385 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.002576708s)
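Having spent its 4m5s budget trying to restart the existing control plane, minikube falls back to a full reset followed by a fresh kubeadm init (run below at 12:13:03). Condensed, the fallback amounts to the following (paths as in the log; the preflight ignore list is abbreviated here):

    BIN=/var/lib/minikube/binaries/v1.24.1
    sudo env PATH="$BIN:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
    sudo env PATH="$BIN:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem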
	I0924 12:13:03.311005    4385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 12:13:03.316345    4385 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 12:13:03.319404    4385 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 12:13:03.322469    4385 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 12:13:03.322478    4385 kubeadm.go:157] found existing configuration files:
	
	I0924 12:13:03.322511    4385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf
	I0924 12:13:03.325320    4385 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 12:13:03.325347    4385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 12:13:03.328059    4385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf
	I0924 12:13:03.330887    4385 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 12:13:03.330918    4385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 12:13:03.333985    4385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf
	I0924 12:13:03.336525    4385 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 12:13:03.336554    4385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 12:13:03.339317    4385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf
	I0924 12:13:03.342336    4385 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 12:13:03.342364    4385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
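The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: any kubeconfig that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it. Condensed into one loop (endpoint and paths from the log):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:50285" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done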
	I0924 12:13:03.345278    4385 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 12:13:03.362703    4385 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0924 12:13:03.362762    4385 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 12:13:03.415149    4385 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 12:13:03.415206    4385 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 12:13:03.415276    4385 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 12:13:03.463702    4385 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 12:13:03.467843    4385 out.go:235]   - Generating certificates and keys ...
	I0924 12:13:03.467874    4385 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 12:13:03.467911    4385 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 12:13:03.467960    4385 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 12:13:03.467996    4385 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 12:13:03.468039    4385 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 12:13:03.468067    4385 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 12:13:03.468099    4385 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 12:13:03.468132    4385 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 12:13:03.468178    4385 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 12:13:03.468218    4385 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 12:13:03.468238    4385 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 12:13:03.468273    4385 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 12:13:03.597373    4385 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 12:13:03.791902    4385 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 12:13:03.849005    4385 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 12:13:03.902909    4385 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 12:13:03.932475    4385 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 12:13:03.933071    4385 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 12:13:03.933095    4385 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 12:13:04.023785    4385 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 12:13:04.027021    4385 out.go:235]   - Booting up control plane ...
	I0924 12:13:04.027069    4385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 12:13:04.027107    4385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 12:13:04.027146    4385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 12:13:04.027194    4385 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 12:13:04.027272    4385 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 12:13:08.528646    4385 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504236 seconds
	I0924 12:13:08.528786    4385 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 12:13:08.534052    4385 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 12:13:09.047772    4385 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 12:13:09.048021    4385 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-070000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 12:13:09.551682    4385 kubeadm.go:310] [bootstrap-token] Using token: ow9nvg.bt83dtd7nvqad9oo
	I0924 12:13:09.556886    4385 out.go:235]   - Configuring RBAC rules ...
	I0924 12:13:09.556956    4385 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 12:13:09.557006    4385 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 12:13:09.562942    4385 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 12:13:09.563722    4385 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 12:13:09.564493    4385 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 12:13:09.565399    4385 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 12:13:09.568586    4385 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 12:13:09.732064    4385 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 12:13:09.955723    4385 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 12:13:09.957923    4385 kubeadm.go:310] 
	I0924 12:13:09.957961    4385 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 12:13:09.957966    4385 kubeadm.go:310] 
	I0924 12:13:09.958004    4385 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 12:13:09.958009    4385 kubeadm.go:310] 
	I0924 12:13:09.958021    4385 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 12:13:09.958053    4385 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 12:13:09.958089    4385 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 12:13:09.958094    4385 kubeadm.go:310] 
	I0924 12:13:09.958120    4385 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 12:13:09.958145    4385 kubeadm.go:310] 
	I0924 12:13:09.958241    4385 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 12:13:09.958247    4385 kubeadm.go:310] 
	I0924 12:13:09.958284    4385 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 12:13:09.958332    4385 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 12:13:09.958378    4385 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 12:13:09.958383    4385 kubeadm.go:310] 
	I0924 12:13:09.958451    4385 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 12:13:09.958504    4385 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 12:13:09.958509    4385 kubeadm.go:310] 
	I0924 12:13:09.958548    4385 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ow9nvg.bt83dtd7nvqad9oo \
	I0924 12:13:09.958605    4385 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4250e15ce19ea6ee8d936fb77d1a59ad22f9367fb00a8a9aa9e1b7fb7d1933b3 \
	I0924 12:13:09.958623    4385 kubeadm.go:310] 	--control-plane 
	I0924 12:13:09.958627    4385 kubeadm.go:310] 
	I0924 12:13:09.958670    4385 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 12:13:09.958674    4385 kubeadm.go:310] 
	I0924 12:13:09.958733    4385 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ow9nvg.bt83dtd7nvqad9oo \
	I0924 12:13:09.958791    4385 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4250e15ce19ea6ee8d936fb77d1a59ad22f9367fb00a8a9aa9e1b7fb7d1933b3 
	I0924 12:13:09.958854    4385 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
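If the join command's CA hash is needed after this output scrolls away, the standard kubeadm recipe recomputes it from the cluster CA (this assumes the default CA path /etc/kubernetes/pki/ca.crt and an RSA CA key):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'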
	I0924 12:13:09.958864    4385 cni.go:84] Creating CNI manager for ""
	I0924 12:13:09.958871    4385 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:13:09.963048    4385 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 12:13:09.969971    4385 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 12:13:09.972971    4385 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
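The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI config. The exact bytes are not in the log; a generic bridge conflist of the same shape looks like this (illustrative values only, including the pod subnet):

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF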
	I0924 12:13:09.977683    4385 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 12:13:09.977742    4385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 12:13:09.977836    4385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-070000 minikube.k8s.io/updated_at=2024_09_24T12_13_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=running-upgrade-070000 minikube.k8s.io/primary=true
	I0924 12:13:09.981456    4385 ops.go:34] apiserver oom_adj: -16
	I0924 12:13:10.020190    4385 kubeadm.go:1113] duration metric: took 42.486875ms to wait for elevateKubeSystemPrivileges
	I0924 12:13:10.020263    4385 kubeadm.go:394] duration metric: took 4m12.833217709s to StartCluster
	I0924 12:13:10.020276    4385 settings.go:142] acquiring lock: {Name:mk8f5a1e4973fb47308ad8c9735bcc716ada1e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:13:10.020365    4385 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:13:10.020784    4385 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/kubeconfig: {Name:mk406b8f0f5e016c0aa63af8364801bb91be8bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:13:10.020995    4385 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:13:10.021000    4385 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 12:13:10.021033    4385 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-070000"
	I0924 12:13:10.021041    4385 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-070000"
	W0924 12:13:10.021044    4385 addons.go:243] addon storage-provisioner should already be in state true
	I0924 12:13:10.021061    4385 host.go:66] Checking if "running-upgrade-070000" exists ...
	I0924 12:13:10.021088    4385 config.go:182] Loaded profile config "running-upgrade-070000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:13:10.021073    4385 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-070000"
	I0924 12:13:10.021105    4385 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-070000"
	I0924 12:13:10.021917    4385 kapi.go:59] client config for running-upgrade-070000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/client.key", CAFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10420a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 12:13:10.022035    4385 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-070000"
	W0924 12:13:10.022040    4385 addons.go:243] addon default-storageclass should already be in state true
	I0924 12:13:10.022047    4385 host.go:66] Checking if "running-upgrade-070000" exists ...
	I0924 12:13:10.023998    4385 out.go:177] * Verifying Kubernetes components...
	I0924 12:13:10.024362    4385 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 12:13:10.028375    4385 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 12:13:10.028382    4385 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/running-upgrade-070000/id_rsa Username:docker}
	I0924 12:13:10.031956    4385 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:13:10.036022    4385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:13:10.040072    4385 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 12:13:10.040079    4385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 12:13:10.040085    4385 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/running-upgrade-070000/id_rsa Username:docker}
	I0924 12:13:10.126550    4385 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 12:13:10.131236    4385 api_server.go:52] waiting for apiserver process to appear ...
	I0924 12:13:10.131280    4385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 12:13:10.135474    4385 api_server.go:72] duration metric: took 114.4705ms to wait for apiserver process to appear ...
	I0924 12:13:10.135480    4385 api_server.go:88] waiting for apiserver healthz status ...
	I0924 12:13:10.135488    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:10.145595    4385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 12:13:10.204192    4385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 12:13:10.472741    4385 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0924 12:13:10.472753    4385 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0924 12:13:15.137545    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:15.137608    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:20.137960    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:20.138008    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:25.138358    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:25.138397    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:30.138892    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:30.138937    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:35.139563    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:35.139622    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:40.140449    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:40.140472    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0924 12:13:40.474957    4385 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0924 12:13:40.479263    4385 out.go:177] * Enabled addons: storage-provisioner
	I0924 12:13:40.487139    4385 addons.go:510] duration metric: took 30.466380125s for enable addons: enabled=[storage-provisioner]
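The default-storageclass addon failed because its callback needs a live API server to list and annotate StorageClasses, and every healthz probe in this window timed out. Once an apiserver answers, the callback's effect is roughly this annotation ("standard" is minikube's bundled StorageClass):

    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'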
	I0924 12:13:45.141465    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:45.141510    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:50.142951    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:50.143017    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:55.144714    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:55.144757    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:00.146856    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:00.146881    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:05.147994    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:05.148021    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:10.150192    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:10.150331    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:10.161194    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:14:10.161276    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:10.171294    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:14:10.171379    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:10.189919    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:14:10.190003    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:10.200204    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:14:10.200289    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:10.210857    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:14:10.210944    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:10.221089    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:14:10.221163    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:10.231036    4385 logs.go:276] 0 containers: []
	W0924 12:14:10.231047    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:10.231119    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:10.241271    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:14:10.241287    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:10.241293    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:10.276864    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:14:10.276874    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:14:10.292230    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:14:10.292240    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:14:10.304260    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:14:10.304270    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:14:10.319381    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:14:10.319394    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:14:10.334965    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:10.334979    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:10.358551    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:10.358558    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:10.393732    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:10.393739    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:10.398010    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:14:10.398019    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:14:10.412070    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:14:10.412081    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:14:10.423911    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:14:10.423923    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:14:10.435303    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:14:10.435313    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:14:10.453355    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:14:10.453366    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:12.967049    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:17.968510    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:17.968706    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:17.983626    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:14:17.983717    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:17.995927    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:14:17.995997    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:18.006817    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:14:18.006912    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:18.016908    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:14:18.016987    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:18.027385    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:14:18.027472    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:18.037988    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:14:18.038069    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:18.048039    4385 logs.go:276] 0 containers: []
	W0924 12:14:18.048057    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:18.048134    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:18.058428    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:14:18.058442    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:18.058447    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:18.093182    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:18.093190    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:18.128764    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:14:18.128779    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:14:18.143045    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:14:18.143058    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:14:18.160833    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:14:18.160848    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:14:18.174534    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:14:18.174550    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:14:18.195909    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:18.195922    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:18.200512    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:14:18.200519    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:14:18.212247    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:14:18.212260    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:14:18.232880    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:14:18.232893    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:14:18.244767    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:14:18.244779    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:14:18.256333    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:18.256348    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:18.281854    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:14:18.281866    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:20.795698    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:25.798024    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:25.798188    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:25.812497    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:14:25.812594    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:25.824703    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:14:25.824788    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:25.835843    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:14:25.835931    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:25.846591    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:14:25.846666    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:25.857289    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:14:25.857369    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:25.868378    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:14:25.868455    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:25.878277    4385 logs.go:276] 0 containers: []
	W0924 12:14:25.878289    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:25.878356    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:25.888460    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:14:25.888478    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:14:25.888484    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:14:25.904005    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:14:25.904016    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:14:25.920438    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:14:25.920451    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:14:25.939122    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:14:25.939132    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:14:25.951162    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:14:25.951176    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:25.966268    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:25.966280    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:25.971229    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:14:25.971236    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:14:25.985779    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:14:25.985790    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:14:26.002896    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:14:26.002909    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:14:26.014140    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:14:26.014153    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:14:26.026197    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:26.026208    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:26.050206    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:26.050227    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:26.084560    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:26.084574    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:28.622767    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:33.625208    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:33.625674    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:33.665999    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:14:33.666164    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:33.688931    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:14:33.689077    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:33.704423    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:14:33.704523    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:33.717563    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:14:33.717654    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:33.728197    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:14:33.728281    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:33.738701    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:14:33.738782    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:33.748889    4385 logs.go:276] 0 containers: []
	W0924 12:14:33.748903    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:33.748975    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:33.759326    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:14:33.759344    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:14:33.759350    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:14:33.773789    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:14:33.773803    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:14:33.785024    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:14:33.785039    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:14:33.796744    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:14:33.796755    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:14:33.810874    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:14:33.810885    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:14:33.828241    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:33.828253    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:33.853062    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:33.853072    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:33.886465    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:33.886474    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:33.923033    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:14:33.923047    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:33.935092    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:14:33.935105    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:14:33.949963    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:14:33.949977    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:14:33.963710    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:33.963721    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:33.968727    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:14:33.968734    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:14:36.484755    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:41.487113    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:41.487622    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:41.524164    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:14:41.524325    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:41.546153    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:14:41.546287    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:41.560640    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:14:41.560730    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:41.572409    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:14:41.572483    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:41.583090    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:14:41.583162    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:41.594401    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:14:41.594485    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:41.605140    4385 logs.go:276] 0 containers: []
	W0924 12:14:41.605151    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:41.605225    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:41.615415    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:14:41.615429    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:14:41.615435    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:14:41.633154    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:41.633166    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:41.657957    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:14:41.657968    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:41.669569    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:41.669582    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:41.705029    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:14:41.705043    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:14:41.719317    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:14:41.719327    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:14:41.734041    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:14:41.734056    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:14:41.746191    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:14:41.746201    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:14:41.760364    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:41.760375    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:41.794837    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:41.794844    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:41.799506    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:14:41.799513    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:14:41.811107    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:14:41.811119    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:14:41.822702    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:14:41.822712    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:14:44.339838    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:49.342100    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:49.342349    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:49.369787    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:14:49.369889    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:49.383384    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:14:49.383475    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:49.394712    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:14:49.394801    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:49.405074    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:14:49.405161    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:49.415887    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:14:49.415963    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:49.426388    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:14:49.426476    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:49.436174    4385 logs.go:276] 0 containers: []
	W0924 12:14:49.436187    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:49.436254    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:49.446410    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:14:49.446425    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:14:49.446430    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:49.459206    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:49.459216    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:49.493190    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:49.493198    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:49.497487    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:14:49.497495    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:14:49.509304    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:14:49.509314    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:14:49.523977    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:14:49.523988    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:14:49.541446    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:49.541460    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:49.566027    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:49.566038    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:49.602201    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:14:49.602215    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:14:49.616768    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:14:49.616779    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:14:49.630480    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:14:49.630494    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:14:49.642125    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:14:49.642137    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:14:49.654050    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:14:49.654062    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:14:52.166282    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:57.168558    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:57.168822    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:57.187484    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:14:57.187592    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:57.201702    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:14:57.201784    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:57.213177    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:14:57.213268    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:57.223722    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:14:57.223801    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:57.233926    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:14:57.234011    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:57.244748    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:14:57.244837    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:57.254527    4385 logs.go:276] 0 containers: []
	W0924 12:14:57.254540    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:57.254611    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:57.264770    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:14:57.264785    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:57.264791    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:57.299395    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:57.299404    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:57.337420    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:14:57.337430    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:14:57.351701    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:14:57.351712    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:14:57.365923    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:14:57.365934    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:14:57.377052    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:14:57.377063    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:14:57.389342    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:14:57.389357    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:14:57.404422    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:14:57.404433    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:14:57.416753    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:14:57.416768    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:14:57.428788    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:57.428799    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:57.453299    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:57.453309    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:57.457503    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:14:57.457512    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:14:57.474127    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:14:57.474139    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:59.987937    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:04.988547    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:04.988691    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:05.002668    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:05.002766    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:05.014653    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:15:05.014732    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:05.032120    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:15:05.032203    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:05.042695    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:15:05.042780    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:05.053755    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:15:05.053827    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:05.066811    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:15:05.066878    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:05.077106    4385 logs.go:276] 0 containers: []
	W0924 12:15:05.077117    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:05.077174    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:05.087976    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:15:05.087991    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:05.087999    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:05.125096    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:05.125105    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:05.129319    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:05.129330    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:05.165265    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:15:05.165276    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:15:05.180164    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:15:05.180175    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:15:05.192263    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:15:05.192274    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:15:05.203919    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:15:05.203929    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:15:05.218176    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:15:05.218187    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:15:05.229500    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:15:05.229510    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:15:05.244054    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:15:05.244066    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:15:05.262397    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:15:05.262407    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:15:05.273899    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:05.273907    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:05.298894    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:15:05.298905    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:07.812666    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:12.814888    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:12.815163    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:12.839585    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:12.839744    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:12.855928    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:15:12.856019    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:12.868565    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:15:12.868658    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:12.881416    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:15:12.881494    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:12.891783    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:15:12.891867    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:12.902406    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:15:12.902493    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:12.913070    4385 logs.go:276] 0 containers: []
	W0924 12:15:12.913082    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:12.913155    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:12.923496    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:15:12.923525    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:12.923533    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:12.948159    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:15:12.948185    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:12.971638    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:12.971651    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:13.007485    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:15:13.007502    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:15:13.019674    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:15:13.019686    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:15:13.045781    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:15:13.045795    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:15:13.057266    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:13.057279    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:13.090244    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:15:13.090252    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:15:13.104923    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:15:13.104936    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:15:13.116772    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:15:13.116785    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:15:13.128333    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:15:13.128345    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:15:13.139809    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:13.139822    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:13.144516    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:15:13.144524    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:15:13.158714    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:15:13.158728    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:15:13.174436    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:15:13.174448    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:15:15.687199    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:20.689653    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:20.689865    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:20.707643    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:20.707755    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:20.722123    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:15:20.722214    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:20.733463    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:15:20.733556    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:20.744140    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:15:20.744226    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:20.755158    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:15:20.755235    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:20.765419    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:15:20.765506    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:20.775743    4385 logs.go:276] 0 containers: []
	W0924 12:15:20.775754    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:20.775824    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:20.789200    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:15:20.789218    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:15:20.789224    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:15:20.800876    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:15:20.800936    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:15:20.814034    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:15:20.814047    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:15:20.832536    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:15:20.832550    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:15:20.850112    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:20.850128    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:20.874269    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:20.874277    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:20.907183    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:20.907190    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:20.946189    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:15:20.946201    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:15:20.957908    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:15:20.957922    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:15:20.974702    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:15:20.974717    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:15:20.990013    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:15:20.990024    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:15:21.001465    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:21.001480    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:21.005937    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:15:21.005943    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:15:21.020086    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:15:21.020099    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:15:21.031422    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:15:21.031435    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:23.546485    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:28.548751    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:28.548872    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:28.561387    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:28.561480    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:28.572108    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:15:28.572193    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:28.582839    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:15:28.582931    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:28.593395    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:15:28.593475    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:28.603700    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:15:28.603775    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:28.614553    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:15:28.614636    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:28.624733    4385 logs.go:276] 0 containers: []
	W0924 12:15:28.624745    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:28.624820    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:28.635830    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:15:28.635849    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:15:28.635855    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:15:28.650482    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:15:28.650493    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:15:28.668170    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:28.668181    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:28.704004    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:15:28.704015    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:15:28.718527    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:15:28.718542    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:15:28.734715    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:28.734728    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:28.769097    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:15:28.769105    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:15:28.780314    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:15:28.780327    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:15:28.791642    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:15:28.791655    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:15:28.806266    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:15:28.806279    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:15:28.821325    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:15:28.821335    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:15:28.833046    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:15:28.833060    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:15:28.844225    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:28.844236    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:28.868841    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:28.868849    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:28.873096    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:15:28.873106    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:31.386730    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:36.389034    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:36.389262    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:36.413222    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:36.413347    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:36.428294    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:15:36.428390    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:36.441386    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:15:36.441483    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:36.452692    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:15:36.452787    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:36.463558    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:15:36.463634    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:36.473728    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:15:36.473799    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:36.483906    4385 logs.go:276] 0 containers: []
	W0924 12:15:36.483919    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:36.483991    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:36.498892    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:15:36.498909    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:15:36.498914    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:15:36.511010    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:15:36.511021    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:15:36.524346    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:15:36.524359    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:15:36.541504    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:15:36.541519    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:15:36.552935    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:36.552945    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:36.577652    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:36.577660    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:36.582015    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:15:36.582023    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:15:36.596181    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:15:36.596194    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:15:36.612389    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:15:36.612401    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:15:36.637242    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:36.637256    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:36.672369    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:36.672379    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:36.706731    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:15:36.706743    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:15:36.718699    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:15:36.718714    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:36.730953    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:15:36.730968    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:15:36.742965    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:15:36.742978    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:15:39.259657    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:44.261953    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:44.262103    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:44.279944    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:44.280044    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:44.293970    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:15:44.294059    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:44.306949    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:15:44.307058    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:44.317734    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:15:44.317822    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:44.327734    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:15:44.327818    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:44.338570    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:15:44.338658    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:44.349131    4385 logs.go:276] 0 containers: []
	W0924 12:15:44.349144    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:44.349220    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:44.359432    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:15:44.359448    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:44.359455    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:44.396998    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:15:44.397006    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:15:44.414696    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:15:44.414707    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:15:44.432896    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:15:44.432912    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:15:44.445185    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:44.445199    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:44.470316    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:15:44.470325    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:44.482207    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:44.482223    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:44.516729    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:15:44.516742    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:15:44.531291    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:15:44.531304    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:15:44.543305    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:15:44.543317    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:15:44.557484    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:15:44.557498    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:15:44.570979    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:44.570993    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:44.575369    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:15:44.575377    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:15:44.587162    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:15:44.587176    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:15:44.598526    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:15:44.598540    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:15:47.112083    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:52.114449    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:52.114721    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:52.139194    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:52.139316    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:52.155254    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:15:52.155356    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:52.168064    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:15:52.168158    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:52.179557    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:15:52.179633    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:52.190186    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:15:52.190273    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:52.200921    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:15:52.201002    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:52.210821    4385 logs.go:276] 0 containers: []
	W0924 12:15:52.210832    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:52.210932    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:52.221424    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:15:52.221444    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:15:52.221451    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:15:52.237789    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:15:52.237801    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:15:52.249057    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:52.249072    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:52.284348    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:52.284359    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:52.319348    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:15:52.319359    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:15:52.331755    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:52.331766    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:52.336578    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:52.336587    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:52.360584    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:15:52.360594    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:15:52.380953    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:15:52.380965    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:52.392639    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:15:52.392650    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:15:52.404816    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:15:52.404833    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:15:52.416733    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:15:52.416749    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:15:52.428247    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:15:52.428262    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:15:52.448461    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:15:52.448475    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:15:52.462162    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:15:52.462173    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:15:54.976490    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:59.977081    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:59.977213    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:59.993655    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:59.993744    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:00.006720    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:00.006808    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:00.017117    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:00.017210    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:00.030732    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:00.030809    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:00.041213    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:00.041299    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:00.051578    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:00.051652    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:00.062335    4385 logs.go:276] 0 containers: []
	W0924 12:16:00.062350    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:00.062426    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:00.073449    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:00.073465    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:00.073471    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:00.084952    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:00.084964    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:00.089649    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:00.089658    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:00.104616    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:00.104632    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:00.116331    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:00.116341    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:00.133791    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:00.133804    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:00.145707    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:00.145723    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:00.178905    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:00.178915    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:00.190457    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:00.190469    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:00.206309    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:00.206321    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:00.220238    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:00.220249    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:00.239673    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:00.239684    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:00.251105    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:00.251115    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:00.262587    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:00.262598    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:00.287873    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:00.287881    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:02.826060    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:07.828366    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:07.828601    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:07.843347    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:16:07.843448    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:07.855551    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:07.855627    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:07.866472    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:07.866551    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:07.877007    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:07.877081    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:07.887043    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:07.887121    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:07.897943    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:07.898027    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:07.920034    4385 logs.go:276] 0 containers: []
	W0924 12:16:07.920044    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:07.920107    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:07.934138    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:07.934159    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:07.934165    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:07.968054    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:07.968068    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:07.972359    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:07.972365    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:07.984174    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:07.984189    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:08.001095    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:08.001106    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:08.012874    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:08.012891    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:08.046742    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:08.046757    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:08.061905    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:08.061916    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:08.073806    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:08.073821    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:08.087724    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:08.087737    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:08.099253    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:08.099267    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:08.111095    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:08.111110    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:08.128599    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:08.128612    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:08.153523    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:08.153530    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:08.165591    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:08.165605    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:10.679584    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:15.682000    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:15.682268    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:15.704578    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:16:15.704720    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:15.720701    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:15.720799    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:15.733274    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:15.733366    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:15.745301    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:15.745378    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:15.760026    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:15.760106    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:15.770392    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:15.770475    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:15.780430    4385 logs.go:276] 0 containers: []
	W0924 12:16:15.780440    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:15.780503    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:15.790602    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:15.790624    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:15.790629    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:15.802302    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:15.802318    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:15.827437    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:15.827447    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:15.832047    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:15.832057    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:15.846473    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:15.846483    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:15.884385    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:15.884396    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:15.898823    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:15.898834    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:15.910698    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:15.910709    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:15.925668    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:15.925680    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:15.938259    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:15.938270    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:15.953615    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:15.953630    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:15.969439    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:15.969450    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:15.986947    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:15.986960    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:15.999670    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:15.999683    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:16.034782    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:16.034794    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:18.551388    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:23.553487    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:23.553722    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:23.574749    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:16:23.574853    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:23.587672    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:23.587761    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:23.598772    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:23.598862    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:23.609625    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:23.609710    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:23.620144    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:23.620226    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:23.630548    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:23.630625    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:23.640467    4385 logs.go:276] 0 containers: []
	W0924 12:16:23.640481    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:23.640558    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:23.651687    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:23.651709    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:23.651717    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:23.665697    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:23.665708    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:23.680085    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:23.680101    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:23.700621    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:23.700637    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:23.716292    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:23.716305    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:23.728471    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:23.728481    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:23.740087    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:23.740097    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:23.764880    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:23.764888    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:23.799161    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:23.799169    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:23.803521    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:23.803529    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:23.838465    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:23.838475    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:23.852929    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:23.852939    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:23.864983    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:23.864994    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:23.876904    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:23.876918    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:23.890714    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:23.890730    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:26.404726    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:31.407046    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:31.407355    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:31.430112    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:16:31.430256    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:31.446865    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:31.446959    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:31.459492    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:31.459575    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:31.472785    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:31.472869    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:31.485178    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:31.485269    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:31.496685    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:31.496820    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:31.508271    4385 logs.go:276] 0 containers: []
	W0924 12:16:31.508284    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:31.508360    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:31.519861    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:31.519880    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:31.519886    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:31.533087    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:31.533100    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:31.552989    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:31.553011    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:31.591919    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:31.591936    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:31.608592    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:31.608608    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:31.625615    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:31.625625    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:31.645224    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:31.645237    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:31.672793    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:31.672808    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:31.685647    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:31.685663    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:31.701093    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:31.701109    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:31.713805    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:31.713817    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:31.750379    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:31.750398    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:31.755178    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:31.755190    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:31.767830    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:31.767845    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:31.783831    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:31.783848    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:34.303897    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:39.306173    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:39.306486    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:39.331422    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:16:39.331557    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:39.349641    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:39.349739    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:39.362474    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:39.362567    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:39.373845    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:39.373930    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:39.384645    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:39.384729    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:39.395341    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:39.395430    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:39.405405    4385 logs.go:276] 0 containers: []
	W0924 12:16:39.405421    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:39.405490    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:39.415656    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:39.415674    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:39.415680    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:39.427438    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:39.427449    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:39.449138    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:39.449147    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:39.473937    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:39.473944    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:39.516632    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:39.516650    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:39.528414    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:39.528430    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:39.540326    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:39.540341    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:39.555101    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:39.555115    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:39.566640    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:39.566651    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:39.580969    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:39.580984    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:39.592496    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:39.592510    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:39.597249    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:39.597256    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:39.611391    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:39.611406    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:39.622858    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:39.622869    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:39.656607    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:39.656618    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:42.180505    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:47.182556    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:47.182726    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:47.195355    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:16:47.195445    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:47.206485    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:47.206581    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:47.219813    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:47.219898    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:47.230333    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:47.230412    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:47.243753    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:47.243838    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:47.254217    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:47.254299    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:47.264276    4385 logs.go:276] 0 containers: []
	W0924 12:16:47.264287    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:47.264359    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:47.274857    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:47.274877    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:47.274883    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:47.279330    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:47.279339    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:47.314456    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:47.314467    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:47.331882    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:47.331896    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:47.367079    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:47.367088    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:47.379270    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:47.379281    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:47.390876    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:47.390888    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:47.402308    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:47.402319    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:47.416594    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:47.416604    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:47.428471    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:47.428481    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:47.443027    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:47.443040    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:47.455121    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:47.455135    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:47.467269    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:47.467283    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:47.482373    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:47.482390    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:47.494366    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:47.494377    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:50.020109    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:55.022322    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:55.022488    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:55.041217    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:16:55.041313    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:55.055019    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:55.055113    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:55.066186    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:55.066268    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:55.076927    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:55.077017    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:55.087320    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:55.087406    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:55.097679    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:55.097757    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:55.108002    4385 logs.go:276] 0 containers: []
	W0924 12:16:55.108013    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:55.108085    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:55.118763    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:55.118791    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:55.118797    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:55.133328    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:55.133339    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:55.146697    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:55.146708    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:55.159058    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:55.159074    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:55.170703    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:55.170714    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:55.181862    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:55.181872    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:55.193281    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:55.193298    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:55.198179    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:55.198186    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:55.233277    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:55.233289    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:55.245705    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:55.245721    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:55.257673    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:55.257684    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:55.276660    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:55.276670    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:55.294457    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:55.294468    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:55.327990    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:55.327997    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:55.342602    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:55.342615    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:57.867982    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:02.870266    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:02.870401    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:17:02.881617    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:17:02.881706    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:17:02.892324    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:17:02.892415    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:17:02.903870    4385 logs.go:276] 4 containers: [2f88f1e45b5c 34ce64cc0e05 9cf23ff694c1 3768dd912d0b]
	I0924 12:17:02.903955    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:17:02.914219    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:17:02.914312    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:17:02.924864    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:17:02.924956    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:17:02.935492    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:17:02.935582    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:17:02.947491    4385 logs.go:276] 0 containers: []
	W0924 12:17:02.947504    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:17:02.947583    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:17:02.959420    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:17:02.959440    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:17:02.959447    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:17:02.993982    4385 logs.go:123] Gathering logs for coredns [34ce64cc0e05] ...
	I0924 12:17:02.993996    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ce64cc0e05"
	I0924 12:17:03.005752    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:17:03.005767    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:17:03.018860    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:17:03.018873    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:17:03.033051    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:17:03.033062    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:17:03.057830    4385 logs.go:123] Gathering logs for coredns [2f88f1e45b5c] ...
	I0924 12:17:03.057843    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f88f1e45b5c"
	I0924 12:17:03.071556    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:17:03.071568    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:17:03.088684    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:17:03.088700    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:17:03.104045    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:17:03.104055    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:17:03.138488    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:17:03.138503    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:17:03.153018    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:17:03.153029    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:17:03.165636    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:17:03.165647    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:17:03.183852    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:17:03.183863    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:17:03.188433    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:17:03.188440    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:17:03.203070    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:17:03.203081    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:17:05.717532    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:10.719786    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:10.724520    4385 out.go:201] 
	W0924 12:17:10.728505    4385 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0924 12:17:10.728510    4385 out.go:270] * 
	W0924 12:17:10.728931    4385 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:17:10.739464    4385 out.go:201] 

                                                
                                                
** /stderr **
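The stderr trace above repeats one cycle until the 6m0s node deadline expires: minikube probes https://10.0.2.15:8443/healthz, the 5-second client timeout fires ("Client.Timeout exceeded while awaiting headers"), and it then re-enumerates the control-plane containers and re-collects their logs before probing again. The sketch below is a minimal Go re-creation of that polling pattern, assuming a hypothetical waitForHealthz helper and a guessed backoff interval; it is illustrative only, not minikube's actual api_server.go implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver healthz endpoint until it returns
	// 200 OK or the overall deadline passes. Hypothetical helper for
	// illustration; names and intervals are assumptions.
	func waitForHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s timeout gaps in the log
			Transport: &http.Transport{
				// The apiserver serves a self-signed cert on 10.0.2.15.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported healthy
				}
			}
			time.Sleep(2 * time.Second) // back off before the next probe
		}
		return fmt.Errorf("apiserver healthz never reported healthy: %s", url)
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

Every probe in the trace fails the same way, which is consistent with an apiserver container that exists (7a189c15c27c) but never becomes ready, rather than one that is missing.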
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-070000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-24 12:17:10.840522 -0700 PDT m=+3506.786633501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-070000 -n running-upgrade-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-070000 -n running-upgrade-070000: exit status 2 (15.635607834s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
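The --format={{.Host}} argument above is a Go text/template expression rendered against minikube's status value, which is how the host can print Running while the cluster behind it is unhealthy. A small illustrative sketch of that templating mechanism follows; the Status struct and its field names here are assumptions for illustration, not minikube's exact types.

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the value minikube renders with --format.
	// Field names are assumed for this sketch.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		// Parse the same template string the test passes via --format.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		// Mirrors the post-mortem output: the host VM is "Running" even
		// though the apiserver never became healthy.
		tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"})
	}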
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-070000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-171000          | force-systemd-flag-171000 | jenkins | v1.34.0 | 24 Sep 24 12:07 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-881000              | force-systemd-env-881000  | jenkins | v1.34.0 | 24 Sep 24 12:07 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-881000           | force-systemd-env-881000  | jenkins | v1.34.0 | 24 Sep 24 12:07 PDT | 24 Sep 24 12:07 PDT |
	| start   | -p docker-flags-217000                | docker-flags-217000       | jenkins | v1.34.0 | 24 Sep 24 12:07 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-171000             | force-systemd-flag-171000 | jenkins | v1.34.0 | 24 Sep 24 12:07 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-171000          | force-systemd-flag-171000 | jenkins | v1.34.0 | 24 Sep 24 12:07 PDT | 24 Sep 24 12:07 PDT |
	| start   | -p cert-expiration-844000             | cert-expiration-844000    | jenkins | v1.34.0 | 24 Sep 24 12:07 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-217000 ssh               | docker-flags-217000       | jenkins | v1.34.0 | 24 Sep 24 12:07 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-217000 ssh               | docker-flags-217000       | jenkins | v1.34.0 | 24 Sep 24 12:07 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-217000                | docker-flags-217000       | jenkins | v1.34.0 | 24 Sep 24 12:07 PDT | 24 Sep 24 12:07 PDT |
	| start   | -p cert-options-628000                | cert-options-628000       | jenkins | v1.34.0 | 24 Sep 24 12:07 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-628000 ssh               | cert-options-628000       | jenkins | v1.34.0 | 24 Sep 24 12:07 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-628000 -- sudo        | cert-options-628000       | jenkins | v1.34.0 | 24 Sep 24 12:07 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-628000                | cert-options-628000       | jenkins | v1.34.0 | 24 Sep 24 12:07 PDT | 24 Sep 24 12:07 PDT |
	| start   | -p running-upgrade-070000             | minikube                  | jenkins | v1.26.0 | 24 Sep 24 12:07 PDT | 24 Sep 24 12:08 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-070000             | running-upgrade-070000    | jenkins | v1.34.0 | 24 Sep 24 12:08 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-844000             | cert-expiration-844000    | jenkins | v1.34.0 | 24 Sep 24 12:10 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-844000             | cert-expiration-844000    | jenkins | v1.34.0 | 24 Sep 24 12:10 PDT | 24 Sep 24 12:10 PDT |
	| start   | -p kubernetes-upgrade-799000          | kubernetes-upgrade-799000 | jenkins | v1.34.0 | 24 Sep 24 12:10 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-799000          | kubernetes-upgrade-799000 | jenkins | v1.34.0 | 24 Sep 24 12:10 PDT | 24 Sep 24 12:10 PDT |
	| start   | -p kubernetes-upgrade-799000          | kubernetes-upgrade-799000 | jenkins | v1.34.0 | 24 Sep 24 12:10 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-799000          | kubernetes-upgrade-799000 | jenkins | v1.34.0 | 24 Sep 24 12:11 PDT | 24 Sep 24 12:11 PDT |
	| start   | -p stopped-upgrade-164000             | minikube                  | jenkins | v1.26.0 | 24 Sep 24 12:11 PDT | 24 Sep 24 12:11 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-164000 stop           | minikube                  | jenkins | v1.26.0 | 24 Sep 24 12:11 PDT | 24 Sep 24 12:11 PDT |
	| start   | -p stopped-upgrade-164000             | stopped-upgrade-164000    | jenkins | v1.34.0 | 24 Sep 24 12:11 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 12:11:55
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 12:11:55.715126    4520 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:11:55.715296    4520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:11:55.715300    4520 out.go:358] Setting ErrFile to fd 2...
	I0924 12:11:55.715304    4520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:11:55.715463    4520 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:11:55.716803    4520 out.go:352] Setting JSON to false
	I0924 12:11:55.736340    4520 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4286,"bootTime":1727200829,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:11:55.736416    4520 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:11:55.740032    4520 out.go:177] * [stopped-upgrade-164000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:11:55.748751    4520 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:11:55.748814    4520 notify.go:220] Checking for updates...
	I0924 12:11:55.754710    4520 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:11:55.757658    4520 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:11:55.760614    4520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:11:55.763681    4520 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:11:55.766674    4520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:11:55.769886    4520 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:11:55.773667    4520 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 12:11:55.776669    4520 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:11:55.779731    4520 out.go:177] * Using the qemu2 driver based on existing profile
	I0924 12:11:55.786634    4520 start.go:297] selected driver: qemu2
	I0924 12:11:55.786640    4520 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-164000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50530 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0924 12:11:55.786687    4520 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:11:55.788844    4520 cni.go:84] Creating CNI manager for ""
	I0924 12:11:55.788880    4520 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:11:55.788909    4520 start.go:340] cluster config:
	{Name:stopped-upgrade-164000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50530 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0924 12:11:55.788967    4520 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:11:55.796668    4520 out.go:177] * Starting "stopped-upgrade-164000" primary control-plane node in "stopped-upgrade-164000" cluster
	I0924 12:11:55.800693    4520 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0924 12:11:55.800708    4520 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0924 12:11:55.800716    4520 cache.go:56] Caching tarball of preloaded images
	I0924 12:11:55.800779    4520 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:11:55.800785    4520 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0924 12:11:55.800847    4520 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/config.json ...
	I0924 12:11:55.801159    4520 start.go:360] acquireMachinesLock for stopped-upgrade-164000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:11:55.801190    4520 start.go:364] duration metric: took 25.833µs to acquireMachinesLock for "stopped-upgrade-164000"
	I0924 12:11:55.801200    4520 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:11:55.801205    4520 fix.go:54] fixHost starting: 
	I0924 12:11:55.801308    4520 fix.go:112] recreateIfNeeded on stopped-upgrade-164000: state=Stopped err=<nil>
	W0924 12:11:55.801316    4520 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:11:51.094519    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:11:51.095078    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:11:51.139401    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:11:51.139560    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:11:51.157237    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:11:51.157342    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:11:51.171453    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:11:51.171548    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:11:51.183333    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:11:51.183407    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:11:51.195101    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:11:51.195192    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:11:51.206453    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:11:51.206531    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:11:51.217133    4385 logs.go:276] 0 containers: []
	W0924 12:11:51.217145    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:11:51.217228    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:11:51.228118    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:11:51.228135    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:11:51.228143    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:11:51.267825    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:11:51.267838    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:11:51.281990    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:11:51.282000    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:11:51.294110    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:11:51.294121    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:11:51.332470    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:11:51.332480    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:11:51.345968    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:11:51.345984    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:11:51.361002    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:11:51.361014    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:11:51.378536    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:11:51.378552    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:11:51.392431    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:11:51.392441    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:11:51.417154    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:11:51.417164    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:11:51.439107    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:11:51.439118    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:11:51.450955    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:11:51.450966    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:11:51.465299    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:11:51.465309    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:11:51.488670    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:11:51.488677    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:11:51.492603    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:11:51.492608    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:11:51.506888    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:11:51.506898    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:11:51.519010    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:11:51.519025    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:11:54.031005    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
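	
	Each probe in the loop above follows the same shape: GET /healthz on the apiserver, give up after the client timeout, then re-list the k8s_* containers and tail 400 lines from each before the next attempt. The probe itself can be reproduced from inside the guest (a sketch; the 5-second budget mirrors the Client.Timeout visible between each probe and its "stopped" line):
	
	# run inside the guest VM; -k because the apiserver's cert is not trusted here
	curl -sk --max-time 5 https://10.0.2.15:8443/healthz || echo "apiserver not ready"
	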
	I0924 12:11:55.809701    4520 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-164000" ...
	I0924 12:11:55.813605    4520 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:11:55.813678    4520 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50495-:22,hostfwd=tcp::50496-:2376,hostname=stopped-upgrade-164000 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/disk.qcow2
	I0924 12:11:55.858272    4520 main.go:141] libmachine: STDOUT: 
	I0924 12:11:55.858295    4520 main.go:141] libmachine: STDERR: 
	I0924 12:11:55.858301    4520 main.go:141] libmachine: Waiting for VM to start (ssh -p 50495 docker@127.0.0.1)...
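	
	The libmachine invocation above is a single long line; restated in shell form, with the machine directory factored into a variable, it is easier to see what each flag does. The two hostfwd rules are what make the guest reachable as localhost:50495 (SSH) and localhost:50496 (Docker TLS):
	
	# same command as the logged line, machine directory factored out
	MK=/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000
	# -accel hvf selects Apple's Hypervisor.framework; -m/-smp come from the profile
	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
	  -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none -boot d -cdrom "$MK/boot2docker.iso" \
	  -qmp "unix:$MK/monitor,server,nowait" -pidfile "$MK/qemu.pid" \
	  -nic user,model=virtio,hostfwd=tcp::50495-:22,hostfwd=tcp::50496-:2376,hostname=stopped-upgrade-164000 \
	  -daemonize "$MK/disk.qcow2"
	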
	I0924 12:11:59.031890    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:11:59.032192    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:11:59.054826    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:11:59.054970    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:11:59.077780    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:11:59.077870    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:11:59.089404    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:11:59.089483    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:11:59.100228    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:11:59.100300    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:11:59.111817    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:11:59.111900    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:11:59.122263    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:11:59.122343    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:11:59.133026    4385 logs.go:276] 0 containers: []
	W0924 12:11:59.133038    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:11:59.133112    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:11:59.143728    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:11:59.143750    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:11:59.143756    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:11:59.148399    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:11:59.148407    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:11:59.165922    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:11:59.165932    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:11:59.201270    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:11:59.201282    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:11:59.226347    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:11:59.226357    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:11:59.240532    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:11:59.240549    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:11:59.259217    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:11:59.259227    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:11:59.274188    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:11:59.274199    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:11:59.289893    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:11:59.289903    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:11:59.312475    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:11:59.312482    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:11:59.325803    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:11:59.325818    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:11:59.338272    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:11:59.338285    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:11:59.374368    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:11:59.374379    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:11:59.388788    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:11:59.388814    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:11:59.406015    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:11:59.406028    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:11:59.417357    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:11:59.417370    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:11:59.431257    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:11:59.431267    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:01.944286    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:06.945692    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:06.946723    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:12:06.959203    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:12:06.959302    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:12:06.971288    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:12:06.971388    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:12:06.983594    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:12:06.983690    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:12:06.995904    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:12:06.995995    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:12:07.008386    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:12:07.008479    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:12:07.020975    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:12:07.021062    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:12:07.034801    4385 logs.go:276] 0 containers: []
	W0924 12:12:07.034817    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:12:07.034896    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:12:07.046807    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:12:07.046832    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:12:07.046838    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:12:07.071318    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:12:07.071342    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:12:07.093936    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:12:07.093949    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:12:07.108575    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:12:07.108588    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:12:07.120201    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:12:07.120214    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:12:07.133310    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:12:07.133321    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:12:07.148655    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:12:07.148665    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:12:07.160395    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:12:07.160408    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:12:07.184077    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:12:07.184098    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:12:07.223004    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:12:07.223028    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:12:07.263886    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:12:07.263907    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:12:07.279881    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:12:07.279903    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:12:07.300496    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:12:07.300515    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:12:07.306114    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:12:07.306127    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:12:07.320383    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:12:07.320400    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:12:07.335183    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:12:07.335195    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:07.349242    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:12:07.349256    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:12:09.882250    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:14.884035    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:14.884229    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:12:14.898626    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:12:14.898718    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:12:14.910117    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:12:14.910200    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:12:14.920723    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:12:14.920807    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:12:14.931309    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:12:14.931385    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:12:14.941668    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:12:14.941752    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:12:14.951836    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:12:14.951906    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:12:14.961827    4385 logs.go:276] 0 containers: []
	W0924 12:12:14.961838    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:12:14.961913    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:12:14.977828    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:12:14.977848    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:12:14.977854    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:12:14.982092    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:12:14.982101    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:12:14.993849    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:12:14.993860    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:12:15.005165    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:12:15.005179    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:12:15.016731    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:12:15.016742    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:12:15.040225    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:12:15.040236    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:12:15.077740    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:12:15.077755    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:12:15.103217    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:12:15.103227    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:12:15.124418    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:12:15.124429    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:15.139426    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:12:15.139436    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:12:15.151246    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:12:15.151255    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:12:15.165127    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:12:15.165139    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:12:15.181079    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:12:15.181092    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:12:15.196616    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:12:15.196627    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:12:15.208342    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:12:15.208356    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:12:15.226687    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:12:15.226702    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:12:15.264437    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:12:15.264444    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:12:16.230822    4520 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/config.json ...
	I0924 12:12:16.231649    4520 machine.go:93] provisionDockerMachine start ...
	I0924 12:12:16.231988    4520 main.go:141] libmachine: Using SSH client type: native
	I0924 12:12:16.232511    4520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105095c00] 0x105098440 <nil>  [] 0s} localhost 50495 <nil> <nil>}
	I0924 12:12:16.232526    4520 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 12:12:16.330859    4520 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 12:12:16.330910    4520 buildroot.go:166] provisioning hostname "stopped-upgrade-164000"
	I0924 12:12:16.331078    4520 main.go:141] libmachine: Using SSH client type: native
	I0924 12:12:16.331349    4520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105095c00] 0x105098440 <nil>  [] 0s} localhost 50495 <nil> <nil>}
	I0924 12:12:16.331365    4520 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-164000 && echo "stopped-upgrade-164000" | sudo tee /etc/hostname
	I0924 12:12:16.421228    4520 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-164000
	
	I0924 12:12:16.421358    4520 main.go:141] libmachine: Using SSH client type: native
	I0924 12:12:16.421575    4520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105095c00] 0x105098440 <nil>  [] 0s} localhost 50495 <nil> <nil>}
	I0924 12:12:16.421589    4520 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-164000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-164000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-164000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 12:12:16.505024    4520 main.go:141] libmachine: SSH cmd err, output: <nil>: 
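	
	The /etc/hosts script above is idempotent: grep -x matches whole lines only and -q suppresses output, signalling the result through the exit status, so an existing 127.0.1.1 entry is rewritten in place and a new one is appended only when none exists. The guard in isolation (assuming a grep that accepts \s, as GNU grep does):
	
	grep -xq '127.0.1.1\s.*' /etc/hosts && echo "entry present, rewrite it" || echo "entry absent, append one"
	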
	I0924 12:12:16.505040    4520 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19700-1081/.minikube CaCertPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19700-1081/.minikube}
	I0924 12:12:16.505056    4520 buildroot.go:174] setting up certificates
	I0924 12:12:16.505063    4520 provision.go:84] configureAuth start
	I0924 12:12:16.505072    4520 provision.go:143] copyHostCerts
	I0924 12:12:16.505188    4520 exec_runner.go:144] found /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.pem, removing ...
	I0924 12:12:16.505198    4520 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.pem
	I0924 12:12:16.505364    4520 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.pem (1078 bytes)
	I0924 12:12:16.505616    4520 exec_runner.go:144] found /Users/jenkins/minikube-integration/19700-1081/.minikube/cert.pem, removing ...
	I0924 12:12:16.505621    4520 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19700-1081/.minikube/cert.pem
	I0924 12:12:16.505700    4520 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19700-1081/.minikube/cert.pem (1123 bytes)
	I0924 12:12:16.505851    4520 exec_runner.go:144] found /Users/jenkins/minikube-integration/19700-1081/.minikube/key.pem, removing ...
	I0924 12:12:16.505858    4520 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19700-1081/.minikube/key.pem
	I0924 12:12:16.505929    4520 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19700-1081/.minikube/key.pem (1675 bytes)
	I0924 12:12:16.506079    4520 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-164000 san=[127.0.0.1 localhost minikube stopped-upgrade-164000]
	I0924 12:12:16.600564    4520 provision.go:177] copyRemoteCerts
	I0924 12:12:16.600607    4520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 12:12:16.600615    4520 sshutil.go:53] new ssh client: &{IP:localhost Port:50495 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/id_rsa Username:docker}
	I0924 12:12:16.638430    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 12:12:16.644727    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0924 12:12:16.651018    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 12:12:16.658269    4520 provision.go:87] duration metric: took 153.209167ms to configureAuth
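	
	configureAuth regenerated the server certificate with san=[127.0.0.1 localhost minikube stopped-upgrade-164000] and copied it into /etc/docker inside the guest. The SANs that actually landed can be checked with standard openssl over the same SSH session:
	
	# print the Subject Alternative Name extension of the provisioned server cert
	openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
	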
	I0924 12:12:16.658279    4520 buildroot.go:189] setting minikube options for container-runtime
	I0924 12:12:16.658379    4520 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:12:16.658430    4520 main.go:141] libmachine: Using SSH client type: native
	I0924 12:12:16.658527    4520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105095c00] 0x105098440 <nil>  [] 0s} localhost 50495 <nil> <nil>}
	I0924 12:12:16.658532    4520 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0924 12:12:16.727716    4520 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0924 12:12:16.727725    4520 buildroot.go:70] root file system type: tmpfs
	I0924 12:12:16.727775    4520 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0924 12:12:16.727844    4520 main.go:141] libmachine: Using SSH client type: native
	I0924 12:12:16.727958    4520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105095c00] 0x105098440 <nil>  [] 0s} localhost 50495 <nil> <nil>}
	I0924 12:12:16.727991    4520 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0924 12:12:16.804669    4520 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
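	
	The empty ExecStart= directive in the unit above is deliberate: systemd interprets a bare ExecStart= as "discard anything inherited so far", and without it a second ExecStart= would be rejected for a Type=notify service, exactly as the unit's own comment block warns. The same idiom as a minimal drop-in (hypothetical override path, for illustration only):
	
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker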
	
	I0924 12:12:16.804734    4520 main.go:141] libmachine: Using SSH client type: native
	I0924 12:12:16.804853    4520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105095c00] 0x105098440 <nil>  [] 0s} localhost 50495 <nil> <nil>}
	I0924 12:12:16.804863    4520 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0924 12:12:17.182191    4520 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
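	
	The diff error above is the expected first-boot path: diff -u exits non-zero when the live unit is missing or differs from the staged one, which takes the || branch that installs docker.service.new, reloads systemd, and enables and restarts docker. The install-if-changed idiom in isolation (hypothetical file and service names):
	
	# install staged.conf only when it differs from, or replaces a missing, live.conf
	sudo diff -u /etc/live.conf /etc/staged.conf \
	  || { sudo mv /etc/staged.conf /etc/live.conf \
	       && sudo systemctl daemon-reload && sudo systemctl restart myservice; }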
	
	I0924 12:12:17.182208    4520 machine.go:96] duration metric: took 950.628875ms to provisionDockerMachine
	I0924 12:12:17.182215    4520 start.go:293] postStartSetup for "stopped-upgrade-164000" (driver="qemu2")
	I0924 12:12:17.182222    4520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 12:12:17.182285    4520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 12:12:17.182297    4520 sshutil.go:53] new ssh client: &{IP:localhost Port:50495 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/id_rsa Username:docker}
	I0924 12:12:17.219711    4520 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 12:12:17.221138    4520 info.go:137] Remote host: Buildroot 2021.02.12
	I0924 12:12:17.221147    4520 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19700-1081/.minikube/addons for local assets ...
	I0924 12:12:17.221232    4520 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19700-1081/.minikube/files for local assets ...
	I0924 12:12:17.221361    4520 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19700-1081/.minikube/files/etc/ssl/certs/15982.pem -> 15982.pem in /etc/ssl/certs
	I0924 12:12:17.221490    4520 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 12:12:17.223972    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/files/etc/ssl/certs/15982.pem --> /etc/ssl/certs/15982.pem (1708 bytes)
	I0924 12:12:17.230995    4520 start.go:296] duration metric: took 48.778458ms for postStartSetup
	I0924 12:12:17.231009    4520 fix.go:56] duration metric: took 21.43352425s for fixHost
	I0924 12:12:17.231046    4520 main.go:141] libmachine: Using SSH client type: native
	I0924 12:12:17.231153    4520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105095c00] 0x105098440 <nil>  [] 0s} localhost 50495 <nil> <nil>}
	I0924 12:12:17.231159    4520 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 12:12:17.301300    4520 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727205137.661267504
	
	I0924 12:12:17.301309    4520 fix.go:216] guest clock: 1727205137.661267504
	I0924 12:12:17.301313    4520 fix.go:229] Guest: 2024-09-24 12:12:17.661267504 -0700 PDT Remote: 2024-09-24 12:12:17.231012 -0700 PDT m=+21.551961459 (delta=430.255504ms)
	I0924 12:12:17.301330    4520 fix.go:200] guest clock delta is within tolerance: 430.255504ms
	I0924 12:12:17.301337    4520 start.go:83] releasing machines lock for "stopped-upgrade-164000", held for 21.50386675s
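	
	fixHost samples date +%s.%N inside the guest and compares it against the host's wall clock; the 430ms delta here is inside minikube's tolerance, so no clock adjustment is needed. A hand-rolled version of the same measurement (a sketch assuming GNU date on the host, e.g. coreutils' gdate on macOS, and ignoring the SSH round trip, which inflates the delta):
	
	host_now=$(date +%s.%N)                                 # use gdate on macOS for %N support
	guest_now=$(ssh -p 50495 docker@127.0.0.1 date +%s.%N)
	echo "guest-host delta: $(echo "$guest_now - $host_now" | bc) s"
	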
	I0924 12:12:17.301404    4520 ssh_runner.go:195] Run: cat /version.json
	I0924 12:12:17.301413    4520 sshutil.go:53] new ssh client: &{IP:localhost Port:50495 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/id_rsa Username:docker}
	I0924 12:12:17.301404    4520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 12:12:17.301443    4520 sshutil.go:53] new ssh client: &{IP:localhost Port:50495 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/id_rsa Username:docker}
	W0924 12:12:17.302036    4520 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50495: connect: connection refused
	I0924 12:12:17.302053    4520 retry.go:31] will retry after 264.324012ms: dial tcp [::1]:50495: connect: connection refused
	W0924 12:12:17.337243    4520 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0924 12:12:17.337299    4520 ssh_runner.go:195] Run: systemctl --version
	I0924 12:12:17.339127    4520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 12:12:17.340801    4520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 12:12:17.340827    4520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0924 12:12:17.343499    4520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0924 12:12:17.348259    4520 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
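
The two find/sed one-liners above patch any pre-existing bridge and podman CNI configs so their subnet (and gateway) match the 10.244.0.0/16 pod CIDR. The podman rewrite, unpacked into a readable form equivalent to the log's one-liner:

	# Rewrite subnet/gateway in podman CNI configs to the pod network.
	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*podman*' \
	  -not -name '*.mk_disabled' -exec sudo sed -i -r \
	    -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
	    -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {} \;
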
	I0924 12:12:17.348268    4520 start.go:495] detecting cgroup driver to use...
	I0924 12:12:17.348353    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 12:12:17.355789    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0924 12:12:17.359138    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0924 12:12:17.362539    4520 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0924 12:12:17.362568    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0924 12:12:17.365604    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 12:12:17.368241    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0924 12:12:17.371205    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 12:12:17.374467    4520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 12:12:17.377505    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0924 12:12:17.380190    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0924 12:12:17.383329    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0924 12:12:17.386597    4520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 12:12:17.389317    4520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 12:12:17.391786    4520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:12:17.472486    4520 ssh_runner.go:195] Run: sudo systemctl restart containerd
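
The sed series above rewrites /etc/containerd/config.toml for the cgroupfs driver (SystemdCgroup = false), migrates runtime names to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and restarts containerd. The core of it, condensed from the commands in the log:

	# Point crictl at containerd and switch containerd to cgroupfs.
	printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' |
	  sudo tee /etc/crictl.yaml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
	  /etc/containerd/config.toml
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
	  /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd
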
	I0924 12:12:17.482488    4520 start.go:495] detecting cgroup driver to use...
	I0924 12:12:17.482557    4520 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0924 12:12:17.488042    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 12:12:17.492767    4520 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 12:12:17.503495    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 12:12:17.508605    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0924 12:12:17.513182    4520 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0924 12:12:17.562785    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0924 12:12:17.568376    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 12:12:17.574869    4520 ssh_runner.go:195] Run: which cri-dockerd
	I0924 12:12:17.576377    4520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0924 12:12:17.579044    4520 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0924 12:12:17.583821    4520 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0924 12:12:17.666465    4520 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0924 12:12:17.740787    4520 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0924 12:12:17.740852    4520 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0924 12:12:17.745934    4520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:12:17.823743    4520 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0924 12:12:18.971674    4520 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.148005792s)
	I0924 12:12:18.971759    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0924 12:12:18.976550    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0924 12:12:18.981154    4520 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0924 12:12:19.063354    4520 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0924 12:12:19.146624    4520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:12:19.227935    4520 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0924 12:12:19.233852    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0924 12:12:19.238010    4520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:12:19.314534    4520 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
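
Because this profile runs the docker runtime, the sequence above stops containerd and crio, re-points crictl at cri-dockerd's socket, and unmasks/enables/restarts docker plus the cri-docker units. Condensed from the log's commands:

	# Hand the CRI endpoint over from containerd to cri-dockerd.
	sudo systemctl stop -f containerd crio
	printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' |
	  sudo tee /etc/crictl.yaml
	sudo systemctl unmask docker.service cri-docker.socket
	sudo systemctl enable docker.socket cri-docker.socket
	sudo systemctl daemon-reload
	sudo systemctl restart docker cri-docker.socket cri-docker.service
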
	I0924 12:12:19.354508    4520 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0924 12:12:19.354604    4520 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0924 12:12:19.356771    4520 start.go:563] Will wait 60s for crictl version
	I0924 12:12:19.356824    4520 ssh_runner.go:195] Run: which crictl
	I0924 12:12:19.358712    4520 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 12:12:19.373507    4520 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0924 12:12:19.373588    4520 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0924 12:12:19.389662    4520 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0924 12:12:19.406396    4520 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0924 12:12:19.406477    4520 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0924 12:12:19.407867    4520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
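
The /etc/hosts update works around sudo's redirection limits: filter out any stale host.minikube.internal line, append the fresh mapping into a temp file, then copy it into place. The same idiom, spaced out:

	# Replace the host.minikube.internal entry without touching other lines.
	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '10.0.2.2\thost.minikube.internal\n'
	} > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm /tmp/hosts.$$
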
	I0924 12:12:19.411581    4520 kubeadm.go:883] updating cluster {Name:stopped-upgrade-164000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50530 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0924 12:12:19.411632    4520 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0924 12:12:19.411686    4520 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0924 12:12:19.422035    4520 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0924 12:12:19.422047    4520 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0924 12:12:19.422099    4520 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0924 12:12:19.425421    4520 ssh_runner.go:195] Run: which lz4
	I0924 12:12:19.426723    4520 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 12:12:19.428016    4520 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 12:12:19.428027    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0924 12:12:20.387123    4520 docker.go:649] duration metric: took 960.504084ms to copy over tarball
	I0924 12:12:20.387201    4520 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 12:12:17.780036    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:21.543486    4520 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.156345792s)
	I0924 12:12:21.543499    4520 ssh_runner.go:146] rm: /preloaded.tar.lz4
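
When the existence check for /preloaded.tar.lz4 fails, the ~360 MB preload tarball is copied into the guest and unpacked over /var so docker's image store is seeded before first use, then deleted to reclaim space. The extraction, as run in the log:

	# Unpack the preloaded images, preserving security xattrs, then clean up.
	sudo tar --xattrs --xattrs-include security.capability \
	  -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
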
	I0924 12:12:21.559324    4520 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0924 12:12:21.563291    4520 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0924 12:12:21.568788    4520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:12:21.652658    4520 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0924 12:12:23.277241    4520 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.624668084s)
	I0924 12:12:23.277361    4520 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0924 12:12:23.291024    4520 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0924 12:12:23.291033    4520 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0924 12:12:23.291038    4520 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 12:12:23.295566    4520 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:12:23.297433    4520 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0924 12:12:23.299298    4520 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:12:23.299343    4520 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0924 12:12:23.301911    4520 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0924 12:12:23.301922    4520 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0924 12:12:23.303378    4520 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:12:23.303412    4520 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0924 12:12:23.304283    4520 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0924 12:12:23.305046    4520 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0924 12:12:23.305830    4520 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0924 12:12:23.307145    4520 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0924 12:12:23.307219    4520 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:12:23.307270    4520 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0924 12:12:23.308587    4520 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0924 12:12:23.309372    4520 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0924 12:12:23.717339    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0924 12:12:23.731407    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0924 12:12:23.732988    4520 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0924 12:12:23.733008    4520 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0924 12:12:23.733048    4520 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0924 12:12:23.745380    4520 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0924 12:12:23.745400    4520 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0924 12:12:23.745476    4520 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0924 12:12:23.749998    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0924 12:12:23.750115    4520 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	W0924 12:12:23.757718    4520 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0924 12:12:23.758010    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:12:23.760566    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0924 12:12:23.760620    4520 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0924 12:12:23.760633    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0924 12:12:23.760691    4520 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0924 12:12:23.766428    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0924 12:12:23.771753    4520 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0924 12:12:23.771774    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0924 12:12:23.774999    4520 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0924 12:12:23.775026    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0924 12:12:23.775095    4520 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0924 12:12:23.775110    4520 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:12:23.775152    4520 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:12:23.788360    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0924 12:12:23.788651    4520 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0924 12:12:23.788670    4520 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0924 12:12:23.789166    4520 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0924 12:12:23.789920    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0924 12:12:23.807115    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0924 12:12:23.856101    4520 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0924 12:12:23.856178    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0924 12:12:23.856293    4520 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0924 12:12:23.856304    4520 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0924 12:12:23.856312    4520 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0924 12:12:23.856362    4520 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0924 12:12:23.856362    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0924 12:12:23.856391    4520 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0924 12:12:23.856400    4520 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0924 12:12:23.856404    4520 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0924 12:12:23.856412    4520 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0924 12:12:23.856439    4520 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0924 12:12:23.856446    4520 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0924 12:12:23.885025    4520 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0924 12:12:23.885063    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0924 12:12:23.885258    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0924 12:12:23.910869    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0924 12:12:23.910940    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0924 12:12:23.979884    4520 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0924 12:12:23.979897    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0924 12:12:24.099890    4520 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0924 12:12:24.133084    4520 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0924 12:12:24.133215    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:12:24.156582    4520 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0924 12:12:24.156598    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0924 12:12:24.158778    4520 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0924 12:12:24.158800    4520 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:12:24.158877    4520 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:12:24.302688    4520 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0924 12:12:24.302711    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 12:12:24.302832    4520 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0924 12:12:24.304294    4520 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0924 12:12:24.304309    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0924 12:12:24.334231    4520 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0924 12:12:24.334246    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0924 12:12:24.563903    4520 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0924 12:12:24.563941    4520 cache_images.go:92] duration metric: took 1.272967708s to LoadCachedImages
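
Each image the preload did not satisfy (the tarball carried k8s.gcr.io tags, while the check wants registry.k8s.io) is removed from the daemon, copied from the host cache, and streamed into docker load. One iteration of that loop, sketched with the same hypothetical HOST/PORT/KEY and simplified privilege handling:

	# Drop the stale tag, ship the cached tarball over, and load it.
	tarball=~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	ssh -i "$KEY" -p "$PORT" "$HOST" 'docker rmi registry.k8s.io/pause:3.7 || true'
	scp -i "$KEY" -P "$PORT" "$tarball" "$HOST:/tmp/pause_3.7"
	ssh -i "$KEY" -p "$PORT" "$HOST" \
	  'sudo mv /tmp/pause_3.7 /var/lib/minikube/images/pause_3.7 &&
	   sudo cat /var/lib/minikube/images/pause_3.7 | docker load'
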
	W0924 12:12:24.563990    4520 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0924 12:12:24.563995    4520 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0924 12:12:24.564058    4520 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-164000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 12:12:24.564142    4520 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0924 12:12:24.580635    4520 cni.go:84] Creating CNI manager for ""
	I0924 12:12:24.580648    4520 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:12:24.580655    4520 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 12:12:24.580664    4520 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-164000 NodeName:stopped-upgrade-164000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 12:12:24.580777    4520 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-164000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 12:12:24.580842    4520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0924 12:12:24.583514    4520 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 12:12:24.583547    4520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 12:12:24.586524    4520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0924 12:12:24.591396    4520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 12:12:24.596401    4520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0924 12:12:24.601181    4520 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0924 12:12:24.602316    4520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 12:12:24.606345    4520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:12:24.687486    4520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 12:12:24.697182    4520 certs.go:68] Setting up /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000 for IP: 10.0.2.15
	I0924 12:12:24.697203    4520 certs.go:194] generating shared ca certs ...
	I0924 12:12:24.697212    4520 certs.go:226] acquiring lock for ca certs: {Name:mk724855f1a91a4bb17b52053043bbe8bd1cc119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:12:24.697401    4520 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.key
	I0924 12:12:24.697455    4520 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.key
	I0924 12:12:24.697466    4520 certs.go:256] generating profile certs ...
	I0924 12:12:24.697546    4520 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/client.key
	I0924 12:12:24.697564    4520 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.key.c66f4644
	I0924 12:12:24.697573    4520 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.crt.c66f4644 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0924 12:12:24.796229    4520 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.crt.c66f4644 ...
	I0924 12:12:24.796242    4520 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.crt.c66f4644: {Name:mk5e28e38bebb807ecccc0831fd829c1d304600a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:12:24.796837    4520 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.key.c66f4644 ...
	I0924 12:12:24.796843    4520 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.key.c66f4644: {Name:mk57cd1eea0ad6d7324af174a11b28aa7e9feacd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:12:24.797008    4520 certs.go:381] copying /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.crt.c66f4644 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.crt
	I0924 12:12:24.797184    4520 certs.go:385] copying /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.key.c66f4644 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.key
	I0924 12:12:24.797350    4520 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/proxy-client.key
	I0924 12:12:24.797502    4520 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/1598.pem (1338 bytes)
	W0924 12:12:24.797531    4520 certs.go:480] ignoring /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/1598_empty.pem, impossibly tiny 0 bytes
	I0924 12:12:24.797537    4520 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 12:12:24.797564    4520 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem (1078 bytes)
	I0924 12:12:24.797588    4520 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem (1123 bytes)
	I0924 12:12:24.797617    4520 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/key.pem (1675 bytes)
	I0924 12:12:24.797670    4520 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/files/etc/ssl/certs/15982.pem (1708 bytes)
	I0924 12:12:24.797983    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 12:12:24.805042    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 12:12:24.811807    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 12:12:24.818636    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 12:12:24.825997    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 12:12:24.833220    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 12:12:24.839904    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 12:12:24.846672    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 12:12:24.854057    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 12:12:24.860801    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/1598.pem --> /usr/share/ca-certificates/1598.pem (1338 bytes)
	I0924 12:12:24.867213    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/files/etc/ssl/certs/15982.pem --> /usr/share/ca-certificates/15982.pem (1708 bytes)
	I0924 12:12:24.874261    4520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 12:12:24.879539    4520 ssh_runner.go:195] Run: openssl version
	I0924 12:12:24.881504    4520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15982.pem && ln -fs /usr/share/ca-certificates/15982.pem /etc/ssl/certs/15982.pem"
	I0924 12:12:24.884559    4520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15982.pem
	I0924 12:12:24.885875    4520 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:35 /usr/share/ca-certificates/15982.pem
	I0924 12:12:24.885897    4520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15982.pem
	I0924 12:12:24.887589    4520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15982.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 12:12:24.890846    4520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 12:12:24.894280    4520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 12:12:24.895871    4520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:19 /usr/share/ca-certificates/minikubeCA.pem
	I0924 12:12:24.895905    4520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 12:12:24.897895    4520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 12:12:24.901114    4520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1598.pem && ln -fs /usr/share/ca-certificates/1598.pem /etc/ssl/certs/1598.pem"
	I0924 12:12:24.903993    4520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1598.pem
	I0924 12:12:24.905265    4520 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:35 /usr/share/ca-certificates/1598.pem
	I0924 12:12:24.905291    4520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1598.pem
	I0924 12:12:24.907014    4520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1598.pem /etc/ssl/certs/51391683.0"
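
Each CA is installed twice: the PEM itself under /usr/share/ca-certificates, and a symlink named after its subject hash under /etc/ssl/certs, which is how OpenSSL locates CAs at verification time (the b5213941.0, 3ec20f2e.0, and 51391683.0 links above). The idiom:

	# Install a CA where OpenSSL's hash-based lookup will find it.
	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
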
	I0924 12:12:24.910059    4520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 12:12:24.911363    4520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 12:12:24.913168    4520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 12:12:24.914962    4520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 12:12:24.917094    4520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 12:12:24.918749    4520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 12:12:24.920619    4520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
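
openssl x509 -checkend 86400 exits non-zero when a certificate expires within 24 hours, which is the cue to regenerate it before bringing the control plane back. The per-cert checks above, folded into a loop:

	# Flag any control-plane cert that expires within the next 24h.
	for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	           etcd/healthcheck-client etcd/peer front-proxy-client; do
	  openssl x509 -noout -checkend 86400 \
	    -in "/var/lib/minikube/certs/${crt}.crt" ||
	    echo "${crt}.crt expires within 24h"
	done
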
	I0924 12:12:24.922271    4520 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-164000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50530 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0924 12:12:24.922347    4520 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0924 12:12:24.940746    4520 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 12:12:24.943632    4520 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 12:12:24.943642    4520 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 12:12:24.943663    4520 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 12:12:24.948085    4520 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 12:12:24.948398    4520 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-164000" does not appear in /Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:12:24.948504    4520 kubeconfig.go:62] /Users/jenkins/minikube-integration/19700-1081/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-164000" cluster setting kubeconfig missing "stopped-upgrade-164000" context setting]
	I0924 12:12:24.948704    4520 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/kubeconfig: {Name:mk406b8f0f5e016c0aa63af8364801bb91be8bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:12:24.949168    4520 kapi.go:59] client config for stopped-upgrade-164000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/client.key", CAFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10666e030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 12:12:24.949512    4520 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 12:12:24.952290    4520 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-164000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0924 12:12:24.952297    4520 kubeadm.go:1160] stopping kube-system containers ...
	I0924 12:12:24.952348    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0924 12:12:24.966014    4520 docker.go:483] Stopping containers: [ea28f7380559 bb8ba6d324a9 0f96fd47fd94 089d88b4ee8a 876b9146846d 3b703291d050 918f102be99c 05293699e3a3]
	I0924 12:12:24.966084    4520 ssh_runner.go:195] Run: docker stop ea28f7380559 bb8ba6d324a9 0f96fd47fd94 089d88b4ee8a 876b9146846d 3b703291d050 918f102be99c 05293699e3a3
	I0924 12:12:24.976912    4520 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 12:12:24.982474    4520 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 12:12:24.985656    4520 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 12:12:24.985662    4520 kubeadm.go:157] found existing configuration files:
	
	I0924 12:12:24.985688    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/admin.conf
	I0924 12:12:24.988289    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 12:12:24.988315    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 12:12:24.990932    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/kubelet.conf
	I0924 12:12:24.993962    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 12:12:24.993987    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 12:12:24.996655    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/controller-manager.conf
	I0924 12:12:24.999106    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 12:12:24.999130    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 12:12:25.002354    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/scheduler.conf
	I0924 12:12:25.005815    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 12:12:25.005867    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
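
The stale-config pass greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that lacks it; here all four files were absent, so every grep exited 2 and every rm was a no-op. Equivalent sketch:

	# Remove kubeconfigs that do not point at the expected endpoint.
	endpoint='https://control-plane.minikube.internal:50530'
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" ||
	    sudo rm -f "/etc/kubernetes/${f}.conf"
	done
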
	I0924 12:12:25.009109    4520 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 12:12:25.012160    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 12:12:25.034655    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 12:12:25.370575    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 12:12:25.494169    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 12:12:25.528438    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
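
Rather than a full kubeadm init, the restart path replays individual init phases against the staged config, then polls for the apiserver process. The phase sequence from the log, looped:

	# Replay control-plane bring-up phase by phase against the staged config.
	K=/var/lib/minikube/binaries/v1.24.1
	for phase in 'certs all' 'kubeconfig all' kubelet-start \
	             'control-plane all' 'etcd local'; do
	  # $phase is intentionally unquoted so it splits into subcommand words.
	  sudo env PATH="$K:$PATH" \
	    kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done
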
	I0924 12:12:25.552203    4520 api_server.go:52] waiting for apiserver process to appear ...
	I0924 12:12:25.552284    4520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 12:12:22.781857    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:22.781980    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:12:22.793599    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:12:22.793710    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:12:22.804766    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:12:22.804870    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:12:22.821970    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:12:22.822047    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:12:22.833572    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:12:22.833655    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:12:22.844209    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:12:22.844286    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:12:22.855595    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:12:22.855679    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:12:22.867180    4385 logs.go:276] 0 containers: []
	W0924 12:12:22.867194    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:12:22.867263    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:12:22.878320    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:12:22.878339    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:12:22.878345    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:12:22.894473    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:12:22.894485    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:22.906510    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:12:22.906522    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:12:22.936877    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:12:22.936894    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:12:22.956217    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:12:22.956235    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:12:22.978794    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:12:22.978808    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:12:23.007886    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:12:23.007906    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:12:23.021110    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:12:23.021126    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:12:23.028531    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:12:23.028548    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:12:23.042955    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:12:23.042968    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:12:23.054977    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:12:23.054993    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:12:23.073490    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:12:23.073503    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:12:23.089188    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:12:23.089199    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:12:23.127280    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:12:23.127292    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:12:23.166887    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:12:23.166899    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:12:23.182121    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:12:23.182134    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:12:23.200475    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:12:23.200488    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:12:25.716841    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:26.054596    4520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 12:12:26.554311    4520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 12:12:26.560406    4520 api_server.go:72] duration metric: took 1.008253042s to wait for apiserver process to appear ...
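
The one-second duration reported here is the process-wait loop finishing: the runner re-executes sudo pgrep -xnf kube-apiserver.*minikube.* until it exits 0. A minimal local sketch of that loop; the function name and 500ms retry interval are illustrative assumptions:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches or the
    // timeout elapses. pgrep exits 0 when at least one process matches.
    func waitForProcess(pattern string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("process %q did not appear within %v", pattern, timeout)
    }

    func main() {
    	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
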
	I0924 12:12:26.560417    4520 api_server.go:88] waiting for apiserver healthz status ...
	I0924 12:12:26.560429    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:30.718476    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:30.718606    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:12:30.731855    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:12:30.731945    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:12:30.742940    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:12:30.743028    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:12:30.754083    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:12:30.754167    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:12:30.765044    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:12:30.765127    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:12:30.775477    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:12:30.775560    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:12:30.786921    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:12:30.787008    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:12:30.797683    4385 logs.go:276] 0 containers: []
	W0924 12:12:30.797695    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:12:30.797767    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:12:31.562283    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:31.562319    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:30.808620    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:12:30.808638    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:12:30.808644    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:12:30.844587    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:12:30.844597    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:12:30.848751    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:12:30.848761    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:12:30.884416    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:12:30.884427    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:12:30.898774    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:12:30.898783    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:12:30.912559    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:12:30.912575    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:12:30.936252    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:12:30.936264    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:12:30.953875    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:12:30.953891    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:12:30.965678    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:12:30.965689    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:12:30.987887    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:12:30.987896    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:12:30.999200    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:12:30.999214    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:12:31.024201    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:12:31.024214    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:12:31.035898    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:12:31.035909    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:12:31.054045    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:12:31.054055    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:12:31.069816    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:12:31.069827    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:12:31.081212    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:12:31.081223    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:31.095174    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:12:31.095185    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:12:33.608910    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:36.562400    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:36.562467    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:38.609053    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:38.609275    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:12:38.623106    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:12:38.623208    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:12:38.634354    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:12:38.634451    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:12:38.647999    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:12:38.648077    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:12:38.658868    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:12:38.658965    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:12:38.669953    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:12:38.670030    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:12:38.682694    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:12:38.682770    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:12:38.692592    4385 logs.go:276] 0 containers: []
	W0924 12:12:38.692603    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:12:38.692668    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:12:38.703256    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:12:38.703276    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:12:38.703280    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:12:38.740457    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:12:38.740467    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:12:38.755245    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:12:38.755259    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:12:38.770700    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:12:38.770711    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:12:38.783527    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:12:38.783540    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:12:38.795245    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:12:38.795257    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:12:38.807216    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:12:38.807228    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:12:38.818790    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:12:38.818800    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:12:38.830643    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:12:38.830659    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:12:38.835529    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:12:38.835535    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:12:38.869502    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:12:38.869513    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:12:38.884150    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:12:38.884180    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:12:38.909334    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:12:38.909345    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:12:38.923598    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:12:38.923610    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:12:38.935488    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:12:38.935499    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:12:38.961020    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:12:38.961033    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:38.974574    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:12:38.974588    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:12:41.562750    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:41.562775    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:41.500025    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:46.563504    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:46.563522    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:46.502232    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:46.502503    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:12:46.522776    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:12:46.522897    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:12:46.536861    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:12:46.536962    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:12:46.549295    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:12:46.549372    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:12:46.559717    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:12:46.559809    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:12:46.570386    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:12:46.570470    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:12:46.580783    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:12:46.580855    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:12:46.594198    4385 logs.go:276] 0 containers: []
	W0924 12:12:46.594214    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:12:46.594290    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:12:46.604874    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:12:46.604893    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:12:46.604899    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:12:46.642004    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:12:46.642012    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:12:46.646346    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:12:46.646354    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:12:46.657721    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:12:46.657732    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:12:46.669575    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:12:46.669587    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:12:46.681020    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:12:46.681030    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:12:46.702914    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:12:46.702923    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:12:46.728275    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:12:46.728286    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:12:46.751014    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:12:46.751029    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:12:46.762449    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:12:46.762459    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:12:46.779812    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:12:46.779823    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:12:46.817984    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:12:46.818000    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:12:46.832147    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:12:46.832164    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:46.843405    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:12:46.843417    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:12:46.858061    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:12:46.858072    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:12:46.870428    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:12:46.870437    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:12:46.884836    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:12:46.884846    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:12:49.397335    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:51.564062    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:51.564109    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:54.399655    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:54.399987    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:12:54.426534    4385 logs.go:276] 2 containers: [347277fe6dd8 9130f5815031]
	I0924 12:12:54.426678    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:12:54.443604    4385 logs.go:276] 2 containers: [4b9b021bfbf3 9c4f3996e841]
	I0924 12:12:54.443718    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:12:54.458612    4385 logs.go:276] 1 containers: [0d0d2e269a9a]
	I0924 12:12:54.458704    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:12:54.470123    4385 logs.go:276] 2 containers: [8ad3aac12145 4aa76c361b77]
	I0924 12:12:54.470207    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:12:54.481424    4385 logs.go:276] 1 containers: [88ee0060b583]
	I0924 12:12:54.481506    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:12:54.492077    4385 logs.go:276] 2 containers: [446ad0eee2a3 39ef84c00e75]
	I0924 12:12:54.492154    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:12:54.502676    4385 logs.go:276] 0 containers: []
	W0924 12:12:54.502692    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:12:54.502754    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:12:54.513271    4385 logs.go:276] 2 containers: [18c96e401687 7350440a30d5]
	I0924 12:12:54.513291    4385 logs.go:123] Gathering logs for storage-provisioner [18c96e401687] ...
	I0924 12:12:54.513296    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c96e401687"
	I0924 12:12:54.524864    4385 logs.go:123] Gathering logs for kube-proxy [88ee0060b583] ...
	I0924 12:12:54.524876    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ee0060b583"
	I0924 12:12:54.537391    4385 logs.go:123] Gathering logs for kube-controller-manager [446ad0eee2a3] ...
	I0924 12:12:54.537401    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 446ad0eee2a3"
	I0924 12:12:54.555651    4385 logs.go:123] Gathering logs for kube-controller-manager [39ef84c00e75] ...
	I0924 12:12:54.555661    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ef84c00e75"
	I0924 12:12:54.567881    4385 logs.go:123] Gathering logs for kube-scheduler [8ad3aac12145] ...
	I0924 12:12:54.567893    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ad3aac12145"
	I0924 12:12:54.579626    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:12:54.579637    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:12:54.603032    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:12:54.603041    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:12:54.641263    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:12:54.641274    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:12:54.645842    4385 logs.go:123] Gathering logs for etcd [4b9b021bfbf3] ...
	I0924 12:12:54.645851    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b9b021bfbf3"
	I0924 12:12:54.659830    4385 logs.go:123] Gathering logs for storage-provisioner [7350440a30d5] ...
	I0924 12:12:54.659841    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7350440a30d5"
	I0924 12:12:54.671349    4385 logs.go:123] Gathering logs for kube-apiserver [347277fe6dd8] ...
	I0924 12:12:54.671360    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 347277fe6dd8"
	I0924 12:12:54.684948    4385 logs.go:123] Gathering logs for kube-apiserver [9130f5815031] ...
	I0924 12:12:54.684959    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f5815031"
	I0924 12:12:54.712580    4385 logs.go:123] Gathering logs for kube-scheduler [4aa76c361b77] ...
	I0924 12:12:54.712591    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4aa76c361b77"
	I0924 12:12:54.727346    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:12:54.727356    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:12:54.739284    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:12:54.739293    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:12:54.778218    4385 logs.go:123] Gathering logs for etcd [9c4f3996e841] ...
	I0924 12:12:54.778229    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c4f3996e841"
	I0924 12:12:54.792905    4385 logs.go:123] Gathering logs for coredns [0d0d2e269a9a] ...
	I0924 12:12:54.792916    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d0d2e269a9a"
	I0924 12:12:56.565030    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:56.565102    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:57.305882    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:02.308184    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:02.308254    4385 kubeadm.go:597] duration metric: took 4m5.106411458s to restartPrimaryControlPlane
	W0924 12:13:02.308302    4385 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 12:13:02.308324    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0924 12:13:03.310945    4385 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.002576708s)
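
After 4m5s of failed healthz probes, restartPrimaryControlPlane gives up and the flow falls back to wiping node state and re-initializing. A sketch of that fallback step, reusing the pinned-binary PATH and CRI socket shown in the log line above (the surrounding control flow is an assumption):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Put the version-pinned kubeadm first on PATH, then force a
    	// reset through the cri-dockerd socket.
    	path := "/var/lib/minikube/binaries/v1.24.1:" + os.Getenv("PATH")
    	cmd := exec.Command("sudo", "env", "PATH="+path,
    		"kubeadm", "reset", "--cri-socket", "/var/run/cri-dockerd.sock", "--force")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    }
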
	I0924 12:13:03.311005    4385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 12:13:03.316345    4385 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 12:13:03.319404    4385 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 12:13:03.322469    4385 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 12:13:03.322478    4385 kubeadm.go:157] found existing configuration files:
	
	I0924 12:13:03.322511    4385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf
	I0924 12:13:03.325320    4385 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 12:13:03.325347    4385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 12:13:03.328059    4385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf
	I0924 12:13:03.330887    4385 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 12:13:03.330918    4385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 12:13:03.333985    4385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf
	I0924 12:13:03.336525    4385 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 12:13:03.336554    4385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 12:13:03.339317    4385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf
	I0924 12:13:03.342336    4385 kubeadm.go:163] "https://control-plane.minikube.internal:50285" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50285 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 12:13:03.342364    4385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
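
The four grep/rm pairs above implement one rule: keep an existing kubeconfig only if it already references the expected control-plane endpoint, otherwise remove it so the upcoming kubeadm init regenerates it. The same rule as a sketch (endpoint copied from the log):

    package main

    import "os/exec"

    func main() {
    	endpoint := "https://control-plane.minikube.internal:50285"
    	for _, f := range []string{"admin.conf", "kubelet.conf",
    		"controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		// grep exits non-zero if the file is missing or the endpoint
    		// is absent; in either case the stale file is removed.
    		if exec.Command("sudo", "grep", endpoint, path).Run() != nil {
    			exec.Command("sudo", "rm", "-f", path).Run()
    		}
    	}
    }
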
	I0924 12:13:03.345278    4385 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 12:13:03.362703    4385 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0924 12:13:03.362762    4385 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 12:13:03.415149    4385 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 12:13:03.415206    4385 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 12:13:03.415276    4385 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 12:13:03.463702    4385 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 12:13:03.467843    4385 out.go:235]   - Generating certificates and keys ...
	I0924 12:13:03.467874    4385 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 12:13:03.467911    4385 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 12:13:03.467960    4385 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 12:13:03.467996    4385 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 12:13:03.468039    4385 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 12:13:03.468067    4385 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 12:13:03.468099    4385 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 12:13:03.468132    4385 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 12:13:03.468178    4385 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 12:13:03.468218    4385 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 12:13:03.468238    4385 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 12:13:03.468273    4385 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 12:13:03.597373    4385 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 12:13:03.791902    4385 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 12:13:03.849005    4385 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 12:13:03.902909    4385 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 12:13:03.932475    4385 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 12:13:03.933071    4385 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 12:13:03.933095    4385 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 12:13:04.023785    4385 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 12:13:01.566323    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:01.566363    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:04.027021    4385 out.go:235]   - Booting up control plane ...
	I0924 12:13:04.027069    4385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 12:13:04.027107    4385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 12:13:04.027146    4385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 12:13:04.027194    4385 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 12:13:04.027272    4385 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 12:13:08.528646    4385 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504236 seconds
	I0924 12:13:08.528786    4385 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 12:13:08.534052    4385 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 12:13:09.047772    4385 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 12:13:09.048021    4385 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-070000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 12:13:09.551682    4385 kubeadm.go:310] [bootstrap-token] Using token: ow9nvg.bt83dtd7nvqad9oo
	I0924 12:13:09.556886    4385 out.go:235]   - Configuring RBAC rules ...
	I0924 12:13:09.556956    4385 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 12:13:09.557006    4385 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 12:13:09.562942    4385 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 12:13:09.563722    4385 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 12:13:09.564493    4385 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 12:13:09.565399    4385 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 12:13:09.568586    4385 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 12:13:09.732064    4385 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 12:13:09.955723    4385 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 12:13:09.957923    4385 kubeadm.go:310] 
	I0924 12:13:09.957961    4385 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 12:13:09.957966    4385 kubeadm.go:310] 
	I0924 12:13:09.958004    4385 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 12:13:09.958009    4385 kubeadm.go:310] 
	I0924 12:13:09.958021    4385 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 12:13:09.958053    4385 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 12:13:09.958089    4385 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 12:13:09.958094    4385 kubeadm.go:310] 
	I0924 12:13:09.958120    4385 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 12:13:09.958145    4385 kubeadm.go:310] 
	I0924 12:13:09.958241    4385 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 12:13:09.958247    4385 kubeadm.go:310] 
	I0924 12:13:09.958284    4385 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 12:13:09.958332    4385 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 12:13:09.958378    4385 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 12:13:09.958383    4385 kubeadm.go:310] 
	I0924 12:13:09.958451    4385 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 12:13:09.958504    4385 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 12:13:09.958509    4385 kubeadm.go:310] 
	I0924 12:13:09.958548    4385 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ow9nvg.bt83dtd7nvqad9oo \
	I0924 12:13:09.958605    4385 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4250e15ce19ea6ee8d936fb77d1a59ad22f9367fb00a8a9aa9e1b7fb7d1933b3 \
	I0924 12:13:09.958623    4385 kubeadm.go:310] 	--control-plane 
	I0924 12:13:09.958627    4385 kubeadm.go:310] 
	I0924 12:13:09.958670    4385 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 12:13:09.958674    4385 kubeadm.go:310] 
	I0924 12:13:09.958733    4385 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ow9nvg.bt83dtd7nvqad9oo \
	I0924 12:13:09.958791    4385 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4250e15ce19ea6ee8d936fb77d1a59ad22f9367fb00a8a9aa9e1b7fb7d1933b3 
	I0924 12:13:09.958854    4385 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
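
The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 over the cluster CA certificate's Subject Public Key Info (DER-encoded), which is how kubeadm pins the CA for joining nodes. A sketch of recomputing it for verification; the ca.crt path matches the certificateDir the log reports:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
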
	I0924 12:13:09.958864    4385 cni.go:84] Creating CNI manager for ""
	I0924 12:13:09.958871    4385 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:13:09.963048    4385 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 12:13:09.969971    4385 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 12:13:09.972971    4385 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 12:13:09.977683    4385 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 12:13:09.977742    4385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 12:13:09.977836    4385 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-070000 minikube.k8s.io/updated_at=2024_09_24T12_13_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=running-upgrade-070000 minikube.k8s.io/primary=true
	I0924 12:13:09.981456    4385 ops.go:34] apiserver oom_adj: -16
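
The oom_adj check confirms the apiserver's OOM score adjustment is -16: negative values bias the kernel against killing the process under memory pressure. A local sketch of the same check (using pgrep -n to pick the newest matching pid, an assumption for the multi-pid case):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("apiserver not running:", err)
    		return
    	}
    	pid := strings.TrimSpace(string(out))
    	// Mirrors: cat /proc/$(pgrep kube-apiserver)/oom_adj
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
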
	I0924 12:13:10.020190    4385 kubeadm.go:1113] duration metric: took 42.486875ms to wait for elevateKubeSystemPrivileges
	I0924 12:13:10.020263    4385 kubeadm.go:394] duration metric: took 4m12.833217709s to StartCluster
	I0924 12:13:10.020276    4385 settings.go:142] acquiring lock: {Name:mk8f5a1e4973fb47308ad8c9735bcc716ada1e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:13:10.020365    4385 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:13:10.020784    4385 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/kubeconfig: {Name:mk406b8f0f5e016c0aa63af8364801bb91be8bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:13:10.020995    4385 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:13:10.021000    4385 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 12:13:10.021033    4385 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-070000"
	I0924 12:13:10.021041    4385 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-070000"
	W0924 12:13:10.021044    4385 addons.go:243] addon storage-provisioner should already be in state true
	I0924 12:13:10.021061    4385 host.go:66] Checking if "running-upgrade-070000" exists ...
	I0924 12:13:10.021088    4385 config.go:182] Loaded profile config "running-upgrade-070000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:13:10.021073    4385 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-070000"
	I0924 12:13:10.021105    4385 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-070000"
	I0924 12:13:10.021917    4385 kapi.go:59] client config for running-upgrade-070000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/client.key", CAFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10420a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
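
The rest.Config dump above shows how the test client authenticates: a client certificate/key pair plus the cluster CA, with no bearer token or basic auth. A client-go sketch that builds the same kind of config (Host and file paths copied from the dump; this is illustrative, not minikube's kapi helper):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/client.crt",
    			KeyFile:  "/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/running-upgrade-070000/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("client ready:", clientset != nil)
    }
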
	I0924 12:13:10.022035    4385 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-070000"
	W0924 12:13:10.022040    4385 addons.go:243] addon default-storageclass should already be in state true
	I0924 12:13:10.022047    4385 host.go:66] Checking if "running-upgrade-070000" exists ...
	I0924 12:13:10.023998    4385 out.go:177] * Verifying Kubernetes components...
	I0924 12:13:10.024362    4385 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 12:13:10.028375    4385 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 12:13:10.028382    4385 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/running-upgrade-070000/id_rsa Username:docker}
	I0924 12:13:10.031956    4385 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:13:06.567767    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:06.567793    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:10.036022    4385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:13:10.040072    4385 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 12:13:10.040079    4385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 12:13:10.040085    4385 sshutil.go:53] new ssh client: &{IP:localhost Port:50253 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/running-upgrade-070000/id_rsa Username:docker}
	I0924 12:13:10.126550    4385 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 12:13:10.131236    4385 api_server.go:52] waiting for apiserver process to appear ...
	I0924 12:13:10.131280    4385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 12:13:10.135474    4385 api_server.go:72] duration metric: took 114.4705ms to wait for apiserver process to appear ...
	I0924 12:13:10.135480    4385 api_server.go:88] waiting for apiserver healthz status ...
	I0924 12:13:10.135488    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:10.145595    4385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 12:13:10.204192    4385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
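
Addon installation here is just scp plus kubectl apply, using the in-VM kubeconfig and the version-pinned kubectl binary. A sketch of the apply half (paths taken from the two Run lines above; the loop structure is an assumption):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	for _, manifest := range []string{
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    		"/etc/kubernetes/addons/storageclass.yaml",
    	} {
    		// sudo accepts VAR=value prefixes, so KUBECONFIG reaches kubectl.
    		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
    			"/var/lib/minikube/binaries/v1.24.1/kubectl", "apply", "-f", manifest)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			log.Fatal(err)
    		}
    	}
    }
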
	I0924 12:13:10.472741    4385 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0924 12:13:10.472753    4385 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0924 12:13:11.569575    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:11.569620    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:15.137545    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:15.137608    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:16.571914    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:16.571955    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:20.137960    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:20.138008    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:21.574139    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:21.574178    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:25.138358    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:25.138397    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:26.576462    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:26.576623    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:13:26.589941    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:13:26.590031    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:13:26.601501    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:13:26.601589    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:13:26.611834    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:13:26.611924    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:13:26.622382    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:13:26.622474    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:13:26.632735    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:13:26.632818    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:13:26.645579    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:13:26.645667    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:13:26.656413    4520 logs.go:276] 0 containers: []
	W0924 12:13:26.656426    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:13:26.656499    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:13:26.667797    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:13:26.667816    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:13:26.667820    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:13:26.680239    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:13:26.680254    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:13:26.705730    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:13:26.705747    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:13:26.719835    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:13:26.719845    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:13:26.731033    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:13:26.731044    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:13:26.813786    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:13:26.813800    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:13:26.854445    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:13:26.854461    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:13:26.868913    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:13:26.868928    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:13:26.880976    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:13:26.880987    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:13:26.898658    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:13:26.898669    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:13:26.912239    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:13:26.912250    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:13:26.923988    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:13:26.923999    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:13:26.928672    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:13:26.928681    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:13:26.943703    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:13:26.943712    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:13:26.959579    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:13:26.959590    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:13:26.971706    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:13:26.971722    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:13:26.997458    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:13:26.997468    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:13:29.538261    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:30.138892    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:30.138937    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:34.540664    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:34.540909    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:13:34.567073    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:13:34.567251    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:13:34.586995    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:13:34.587091    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:13:34.598301    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:13:34.598397    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:13:34.609060    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:13:34.609152    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:13:34.619261    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:13:34.619333    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:13:34.630146    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:13:34.630226    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:13:34.640676    4520 logs.go:276] 0 containers: []
	W0924 12:13:34.640688    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:13:34.640759    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:13:34.652195    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:13:34.652216    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:13:34.652222    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:13:34.664207    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:13:34.664218    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:13:34.669505    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:13:34.669513    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:13:34.710227    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:13:34.710242    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:13:34.722357    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:13:34.722366    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:13:34.745961    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:13:34.745969    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:13:34.759729    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:13:34.759739    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:13:34.775107    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:13:34.775123    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:13:34.787004    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:13:34.787014    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:13:34.802309    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:13:34.802319    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:13:34.814654    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:13:34.814669    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:13:34.832121    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:13:34.832136    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:13:34.850881    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:13:34.850892    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:13:34.889588    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:13:34.889602    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:13:34.903436    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:13:34.903451    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:13:34.914770    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:13:34.914784    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:13:34.951523    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:13:34.951541    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:13:35.139563    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:35.139622    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:40.140449    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:40.140472    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0924 12:13:40.474957    4385 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0924 12:13:40.479263    4385 out.go:177] * Enabled addons: storage-provisioner
	I0924 12:13:37.469561    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:40.487139    4385 addons.go:510] duration metric: took 30.466380125s for enable addons: enabled=[storage-provisioner]
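	[Editor's sketch] The interleaved "Checking apiserver healthz ..." / "stopped: ... Client.Timeout exceeded while awaiting headers" pairs above (from both PID 4385 and 4520) come from a retry loop that probes the guest apiserver over HTTPS with a short per-request timeout. A minimal sketch of such a poller, assuming a 5-second client timeout (inferred only from the roughly 5-second spacing of the stopped lines, not taken from minikube's source) and skipping TLS verification for the guest's self-signed certificate:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz approximates the check/stopped cycle in the log: an HTTPS
	// GET against /healthz with a short client timeout, retried until it
	// succeeds or the attempt budget runs out.
	func pollHealthz(url string, attempts int) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // assumption, see note above
			Transport: &http.Transport{
				// Skip verification for the self-signed guest cert
				// (assumption for this sketch).
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("stopped: %s: %v\n", url, err)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := pollHealthz("https://10.0.2.15:8443/healthz", 10); err != nil {
			fmt.Println(err)
		}
	}
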
	I0924 12:13:42.471933    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:42.472239    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:13:42.495104    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:13:42.495231    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:13:42.512856    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:13:42.512948    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:13:42.525617    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:13:42.525702    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:13:42.536454    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:13:42.536530    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:13:42.546964    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:13:42.547045    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:13:42.557256    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:13:42.557332    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:13:42.567612    4520 logs.go:276] 0 containers: []
	W0924 12:13:42.567625    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:13:42.567688    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:13:42.578032    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:13:42.578053    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:13:42.578058    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:13:42.623055    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:13:42.623070    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:13:42.637371    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:13:42.637383    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:13:42.677234    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:13:42.677246    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:13:42.691997    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:13:42.692009    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:13:42.730383    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:13:42.730394    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:13:42.741615    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:13:42.741627    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:13:42.756551    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:13:42.756568    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:13:42.781198    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:13:42.781205    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:13:42.797467    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:13:42.797480    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:13:42.809227    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:13:42.809241    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:13:42.813214    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:13:42.813220    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:13:42.827461    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:13:42.827476    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:13:42.842294    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:13:42.842308    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:13:42.854451    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:13:42.854465    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:13:42.867041    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:13:42.867056    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:13:42.885083    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:13:42.885092    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:13:45.399630    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:45.141465    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:45.141510    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:50.402052    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:50.402249    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:13:50.426580    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:13:50.426680    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:13:50.439623    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:13:50.439715    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:13:50.450637    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:13:50.450720    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:13:50.468695    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:13:50.468789    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:13:50.479240    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:13:50.479324    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:13:50.489876    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:13:50.489968    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:13:50.500422    4520 logs.go:276] 0 containers: []
	W0924 12:13:50.500437    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:13:50.500513    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:13:50.511472    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:13:50.511492    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:13:50.511497    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:13:50.522582    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:13:50.522595    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:13:50.540872    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:13:50.540881    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:13:50.544939    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:13:50.544944    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:13:50.558605    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:13:50.558619    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:13:50.569704    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:13:50.569717    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:13:50.593523    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:13:50.593530    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:13:50.627779    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:13:50.627789    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:13:50.645618    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:13:50.645628    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:13:50.663663    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:13:50.663676    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:13:50.685027    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:13:50.685038    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:13:50.699037    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:13:50.699053    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:13:50.142951    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:50.143017    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:50.713795    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:13:50.713810    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:13:50.725217    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:13:50.725232    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:13:50.761921    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:13:50.761930    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:13:50.798824    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:13:50.798838    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:13:50.813887    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:13:50.813900    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:13:53.335390    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:55.144714    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:55.144757    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:58.337585    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:58.337776    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:13:58.352737    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:13:58.352835    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:13:58.365167    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:13:58.365251    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:13:58.377369    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:13:58.377447    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:13:58.387606    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:13:58.387700    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:13:58.398212    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:13:58.398291    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:13:58.408918    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:13:58.408997    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:13:58.419115    4520 logs.go:276] 0 containers: []
	W0924 12:13:58.419127    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:13:58.419202    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:13:58.429500    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:13:58.429519    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:13:58.429524    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:13:58.446681    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:13:58.446697    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:13:58.457732    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:13:58.457741    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:13:58.472282    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:13:58.472292    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:13:58.510493    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:13:58.510510    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:13:58.524524    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:13:58.524533    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:13:58.538560    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:13:58.538574    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:13:58.564191    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:13:58.564199    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:13:58.575333    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:13:58.575348    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:13:58.590162    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:13:58.590175    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:13:58.601736    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:13:58.601752    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:13:58.637702    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:13:58.637714    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:13:58.650339    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:13:58.650351    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:13:58.662187    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:13:58.662199    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:13:58.673351    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:13:58.673367    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:13:58.713004    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:13:58.713015    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:13:58.717270    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:13:58.717276    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:00.146856    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:00.146881    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:01.239794    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:05.147994    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:05.148021    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:06.242018    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:06.242139    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:06.256104    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:14:06.256201    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:06.266788    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:14:06.266862    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:06.277257    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:14:06.277327    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:06.287558    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:14:06.287638    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:06.297984    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:14:06.298061    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:06.308516    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:14:06.308594    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:06.322974    4520 logs.go:276] 0 containers: []
	W0924 12:14:06.322985    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:06.323057    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:06.333458    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:14:06.333478    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:14:06.333483    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:14:06.346891    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:14:06.346901    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:14:06.360745    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:14:06.360756    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:14:06.374346    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:14:06.374358    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:14:06.388949    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:06.388960    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:06.393003    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:14:06.393013    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:06.411462    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:14:06.411472    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:14:06.423093    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:06.423103    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:06.445914    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:14:06.445921    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:14:06.459620    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:14:06.459631    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:14:06.497305    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:14:06.497317    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:14:06.508835    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:14:06.508849    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:14:06.530965    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:14:06.530976    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:14:06.542520    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:14:06.542531    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:06.554070    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:06.554082    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:06.593038    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:06.593047    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:06.627440    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:14:06.627449    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:14:09.141207    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:10.150192    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:10.150331    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:10.161194    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:14:10.161276    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:10.171294    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:14:10.171379    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:10.189919    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:14:10.190003    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:10.200204    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:14:10.200289    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:10.210857    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:14:10.210944    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:10.221089    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:14:10.221163    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:10.231036    4385 logs.go:276] 0 containers: []
	W0924 12:14:10.231047    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:10.231119    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:10.241271    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:14:10.241287    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:10.241293    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:10.276864    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:14:10.276874    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:14:10.292230    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:14:10.292240    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:14:10.304260    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:14:10.304270    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:14:10.319381    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:14:10.319394    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:14:10.334965    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:10.334979    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:10.358551    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:10.358558    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:10.393732    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:10.393739    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:10.398010    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:14:10.398019    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:14:10.412070    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:14:10.412081    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:14:10.423911    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:14:10.423923    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:14:10.435303    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:14:10.435313    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:14:10.453355    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:14:10.453366    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:14.143516    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:14.143663    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:14.155843    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:14:14.155940    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:14.169312    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:14:14.169396    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:14.179729    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:14:14.179810    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:14.190587    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:14:14.190676    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:14.201596    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:14:14.201680    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:14.212457    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:14:14.212544    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:14.223731    4520 logs.go:276] 0 containers: []
	W0924 12:14:14.223741    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:14.223812    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:14.234295    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:14:14.234311    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:14:14.234317    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:14:14.273086    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:14:14.273097    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:14:14.299964    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:14:14.299976    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:14:14.313084    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:14:14.313094    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:14:14.324212    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:14:14.324222    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:14:14.336028    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:14:14.336040    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:14.352330    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:14:14.352341    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:14:14.363751    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:14:14.363762    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:14:14.375242    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:14.375252    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:14.411984    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:14:14.411994    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:14:14.425745    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:14:14.425755    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:14:14.443825    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:14:14.443838    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:14:14.455830    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:14:14.455844    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:14.467549    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:14.467560    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:14.471613    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:14.471619    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:14.505368    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:14:14.505382    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:14:14.526231    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:14.526241    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:12.967049    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:17.053103    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:17.968510    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:17.968706    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:17.983626    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:14:17.983717    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:17.995927    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:14:17.995997    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:18.006817    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:14:18.006912    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:18.016908    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:14:18.016987    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:18.027385    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:14:18.027472    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:18.037988    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:14:18.038069    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:18.048039    4385 logs.go:276] 0 containers: []
	W0924 12:14:18.048057    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:18.048134    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:18.058428    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:14:18.058442    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:18.058447    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:18.093182    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:18.093190    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:18.128764    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:14:18.128779    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:14:18.143045    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:14:18.143058    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:14:18.160833    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:14:18.160848    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:14:18.174534    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:14:18.174550    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:14:18.195909    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:18.195922    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:18.200512    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:14:18.200519    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:14:18.212247    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:14:18.212260    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:14:18.232880    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:14:18.232893    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:14:18.244767    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:14:18.244779    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:14:18.256333    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:18.256348    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:18.281854    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:14:18.281866    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
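	[Editor's sketch] The "container status" command above embeds a runtime fallback: use crictl if it is on PATH, otherwise fall back to "docker ps -a". Sketched locally in Go under the assumption that bash and docker are available on the host; minikube executes the identical one-liner on the guest over SSH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus reproduces the fallback visible in the "container
	// status" lines: prefer crictl when present, else fall back to docker.
	func containerStatus() (string, error) {
		out, err := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
		return string(out), err
	}

	func main() {
		status, err := containerStatus()
		if err != nil {
			fmt.Println("error:", err)
		}
		fmt.Print(status)
	}
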
	I0924 12:14:20.795698    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:22.055431    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:22.055580    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:22.069844    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:14:22.069942    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:22.081021    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:14:22.081107    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:22.096634    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:14:22.096717    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:22.107912    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:14:22.108003    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:22.118479    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:14:22.118559    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:22.128667    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:14:22.128752    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:22.138934    4520 logs.go:276] 0 containers: []
	W0924 12:14:22.138945    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:22.139016    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:22.148990    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:14:22.149009    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:22.149014    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:22.186391    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:14:22.186405    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:14:22.202483    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:14:22.202497    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:14:22.218316    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:14:22.218331    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:14:22.229997    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:22.230009    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:22.234549    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:14:22.234557    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:14:22.273709    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:14:22.273725    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:22.288418    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:14:22.288431    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:14:22.300313    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:14:22.300328    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:14:22.315306    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:14:22.315320    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:14:22.327276    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:14:22.327290    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:14:22.338853    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:22.338867    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:22.362212    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:22.362221    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:22.399477    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:14:22.399486    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:22.410684    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:14:22.410696    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:14:22.428331    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:14:22.428342    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:14:22.442241    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:14:22.442251    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:14:24.955719    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:25.798024    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:25.798188    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:29.957922    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:29.958087    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:29.971817    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:14:29.971909    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:29.984534    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:14:29.984618    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:29.995226    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:14:29.995320    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:30.005761    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:14:30.005845    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:30.016148    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:14:30.016234    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:30.027559    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:14:30.027643    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:30.037545    4520 logs.go:276] 0 containers: []
	W0924 12:14:30.037560    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:30.037636    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:30.048022    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:14:30.048040    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:14:30.048046    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:30.062337    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:14:30.062348    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:14:30.073515    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:14:30.073526    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:14:30.084990    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:14:30.084999    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:14:30.104969    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:14:30.104980    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:14:30.116266    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:30.116278    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:30.154782    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:30.154793    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:30.159035    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:30.159041    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:30.193490    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:14:30.193507    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:14:30.231465    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:14:30.231475    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:14:30.243745    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:14:30.243761    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:14:30.265309    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:14:30.265326    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:30.277084    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:14:30.277098    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:14:30.291229    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:14:30.291239    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:14:30.305081    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:14:30.305096    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:14:30.318299    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:14:30.318314    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:14:30.329600    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:30.329611    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:25.812497    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:14:25.812594    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:25.824703    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:14:25.824788    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:25.835843    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:14:25.835931    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:25.846591    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:14:25.846666    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:25.857289    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:14:25.857369    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:25.868378    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:14:25.868455    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:25.878277    4385 logs.go:276] 0 containers: []
	W0924 12:14:25.878289    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:25.878356    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:25.888460    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:14:25.888478    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:14:25.888484    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:14:25.904005    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:14:25.904016    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:14:25.920438    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:14:25.920451    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:14:25.939122    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:14:25.939132    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:14:25.951162    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:14:25.951176    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:25.966268    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:25.966280    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:25.971229    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:14:25.971236    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:14:25.985779    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:14:25.985790    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:14:26.002896    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:14:26.002909    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:14:26.014140    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:14:26.014153    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:14:26.026197    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:26.026208    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:26.050206    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:26.050227    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:26.084560    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:26.084574    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:28.622767    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:32.855527    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:33.625208    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:33.625674    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:33.665999    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:14:33.666164    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:33.688931    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:14:33.689077    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:33.704423    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:14:33.704523    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:33.717563    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:14:33.717654    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:33.728197    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:14:33.728281    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:33.738701    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:14:33.738782    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:33.748889    4385 logs.go:276] 0 containers: []
	W0924 12:14:33.748903    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:33.748975    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:33.759326    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:14:33.759344    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:14:33.759350    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:14:33.773789    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:14:33.773803    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:14:33.785024    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:14:33.785039    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:14:33.796744    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:14:33.796755    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:14:33.810874    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:14:33.810885    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:14:33.828241    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:33.828253    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:33.853062    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:33.853072    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:33.886465    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:33.886474    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:33.923033    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:14:33.923047    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:33.935092    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:14:33.935105    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:14:33.949963    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:14:33.949977    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:14:33.963710    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:33.963721    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:33.968727    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:14:33.968734    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:14:37.857232    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:37.857420    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:37.872165    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:14:37.872271    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:37.884017    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:14:37.884106    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:37.895034    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:14:37.895125    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:37.911882    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:14:37.911976    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:37.922982    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:14:37.923071    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:37.934582    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:14:37.934664    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:37.945146    4520 logs.go:276] 0 containers: []
	W0924 12:14:37.945157    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:37.945232    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:37.968050    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:14:37.968068    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:14:37.968073    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:14:37.979908    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:14:37.979924    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:37.991631    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:14:37.991646    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:14:38.004888    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:14:38.004902    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:14:38.017674    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:38.017685    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:38.057485    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:14:38.057495    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:14:38.071484    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:14:38.071494    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:14:38.085551    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:14:38.085566    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:14:38.097139    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:38.097151    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:38.101277    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:38.101287    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:38.137296    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:14:38.137310    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:38.154687    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:14:38.154698    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:14:38.166223    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:14:38.166235    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:14:38.181594    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:14:38.181604    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:14:38.200096    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:38.200106    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:38.224473    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:14:38.224485    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:14:38.238941    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:14:38.238955    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:14:36.484755    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:40.781275    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:41.487113    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:41.487622    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:41.524164    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:14:41.524325    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:41.546153    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:14:41.546287    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:41.560640    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:14:41.560730    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:41.572409    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:14:41.572483    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:41.583090    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:14:41.583162    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:41.594401    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:14:41.594485    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:41.605140    4385 logs.go:276] 0 containers: []
	W0924 12:14:41.605151    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:41.605225    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:41.615415    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:14:41.615429    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:14:41.615435    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:14:41.633154    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:41.633166    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:41.657957    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:14:41.657968    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:41.669569    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:41.669582    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:41.705029    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:14:41.705043    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:14:41.719317    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:14:41.719327    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:14:41.734041    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:14:41.734056    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:14:41.746191    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:14:41.746201    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:14:41.760364    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:41.760375    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:41.794837    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:41.794844    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:41.799506    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:14:41.799513    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:14:41.811107    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:14:41.811119    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:14:41.822702    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:14:41.822712    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:14:44.339838    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:45.783669    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:45.784120    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:45.816861    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:14:45.817035    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:45.836468    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:14:45.836584    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:45.851584    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:14:45.851678    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:45.864415    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:14:45.864507    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:45.874989    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:14:45.875079    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:45.886152    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:14:45.886232    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:45.896788    4520 logs.go:276] 0 containers: []
	W0924 12:14:45.896800    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:45.896864    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:45.907989    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:14:45.908008    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:14:45.908014    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:14:45.924091    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:14:45.924104    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:14:45.936174    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:14:45.936191    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:14:45.950350    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:14:45.950364    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:14:45.962545    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:45.962556    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:45.967185    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:14:45.967195    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:14:45.981992    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:14:45.982005    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:14:45.994043    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:14:45.994055    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:14:46.006090    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:14:46.006102    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:46.018498    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:46.018509    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:46.057790    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:14:46.057804    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:14:46.072021    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:14:46.072032    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:14:46.089394    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:14:46.089407    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:46.104104    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:14:46.104115    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:14:46.141454    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:14:46.141468    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:14:46.155417    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:46.155431    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:46.179336    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:46.179343    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:48.715487    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:49.342100    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:49.342349    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:49.369787    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:14:49.369889    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:49.383384    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:14:49.383475    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:49.394712    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:14:49.394801    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:49.405074    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:14:49.405161    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:49.415887    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:14:49.415963    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:49.426388    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:14:49.426476    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:49.436174    4385 logs.go:276] 0 containers: []
	W0924 12:14:49.436187    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:49.436254    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:49.446410    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:14:49.446425    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:14:49.446430    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:49.459206    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:49.459216    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:49.493190    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:49.493198    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:49.497487    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:14:49.497495    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:14:49.509304    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:14:49.509314    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:14:49.523977    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:14:49.523988    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:14:49.541446    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:49.541460    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:49.566027    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:49.566038    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:49.602201    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:14:49.602215    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:14:49.616768    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:14:49.616779    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:14:49.630480    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:14:49.630494    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:14:49.642125    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:14:49.642137    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:14:49.654050    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:14:49.654062    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:14:53.717948    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:53.718382    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:53.748113    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:14:53.748267    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:53.766542    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:14:53.766656    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:53.780958    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:14:53.781042    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:53.793006    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:14:53.793102    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:53.805696    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:14:53.805769    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:53.816637    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:14:53.816725    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:53.826802    4520 logs.go:276] 0 containers: []
	W0924 12:14:53.826814    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:53.826886    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:53.842095    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:14:53.842114    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:14:53.842120    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:14:53.856215    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:14:53.856226    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:53.871401    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:53.871416    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:53.910341    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:53.910350    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:53.946092    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:14:53.946107    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:14:53.959725    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:14:53.959741    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:53.971536    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:14:53.971547    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:14:54.010058    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:14:54.010070    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:14:54.022013    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:14:54.022028    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:14:54.033199    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:14:54.033211    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:14:54.048085    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:14:54.048098    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:14:54.059765    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:14:54.059781    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:14:54.070813    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:54.070828    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:54.075022    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:14:54.075028    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:14:54.088853    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:54.088863    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:54.113513    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:14:54.113526    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:14:54.131099    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:14:54.131110    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:14:52.166282    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:56.646651    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:57.168558    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:57.168822    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:57.187484    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:14:57.187592    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:57.201702    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:14:57.201784    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:57.213177    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:14:57.213268    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:57.223722    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:14:57.223801    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:57.233926    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:14:57.234011    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:57.244748    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:14:57.244837    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:57.254527    4385 logs.go:276] 0 containers: []
	W0924 12:14:57.254540    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:57.254611    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:57.264770    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:14:57.264785    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:57.264791    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:57.299395    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:57.299404    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:57.337420    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:14:57.337430    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:14:57.351701    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:14:57.351712    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:14:57.365923    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:14:57.365934    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:14:57.377052    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:14:57.377063    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:14:57.389342    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:14:57.389357    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:14:57.404422    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:14:57.404433    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:14:57.416753    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:14:57.416768    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:14:57.428788    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:57.428799    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:57.453299    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:57.453309    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:57.457503    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:14:57.457512    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:14:57.474127    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:14:57.474139    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:59.987937    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:01.649238    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:01.649687    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:01.690230    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:01.690400    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:01.711386    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:01.711525    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:01.726807    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:01.726904    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:01.739382    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:01.739473    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:01.751493    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:01.751579    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:01.762584    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:01.762660    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:01.779748    4520 logs.go:276] 0 containers: []
	W0924 12:15:01.779760    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:01.779834    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:01.790333    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:15:01.790349    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:01.790355    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:01.829593    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:01.829607    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:01.841770    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:01.841782    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:01.853448    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:01.853458    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:15:01.864582    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:01.864594    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:01.868759    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:01.868768    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:01.883980    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:01.883990    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:01.899375    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:01.899386    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:01.920456    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:01.920468    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:01.945958    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:01.945969    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:01.957970    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:01.957981    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:01.996990    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:01.997008    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:02.033279    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:02.033290    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:02.048899    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:02.048912    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:02.064632    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:02.064647    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:02.077447    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:02.077460    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:02.094486    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:02.094496    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:15:04.611678    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:04.988547    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:04.988691    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:05.002668    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:05.002766    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:05.014653    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:15:05.014732    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:05.032120    4385 logs.go:276] 2 containers: [d70eedf42cf6 77dfe0886a80]
	I0924 12:15:05.032203    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:05.042695    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:15:05.042780    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:05.053755    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:15:05.053827    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:05.066811    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:15:05.066878    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:05.077106    4385 logs.go:276] 0 containers: []
	W0924 12:15:05.077117    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:05.077174    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:05.087976    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:15:05.087991    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:05.087999    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:05.125096    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:05.125105    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:05.129319    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:05.129330    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:05.165265    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:15:05.165276    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:15:05.180164    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:15:05.180175    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:15:05.192263    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:15:05.192274    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:15:05.203919    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:15:05.203929    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:15:05.218176    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:15:05.218187    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:15:05.229500    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:15:05.229510    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:15:05.244054    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:15:05.244066    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:15:05.262397    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:15:05.262407    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:15:05.273899    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:05.273907    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:05.298894    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:15:05.298905    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:09.612893    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:09.613087    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:09.626330    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:09.626421    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:09.637082    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:09.637170    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:09.648327    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:09.648404    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:09.659178    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:09.659266    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:09.669783    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:09.669855    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:09.680388    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:09.680467    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:09.690756    4520 logs.go:276] 0 containers: []
	W0924 12:15:09.690770    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:09.690833    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:09.701060    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:15:09.701079    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:09.701085    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:09.740274    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:09.740283    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:09.777707    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:09.777718    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:09.792534    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:09.792547    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:09.803704    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:09.803717    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:09.816083    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:09.816094    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:09.838964    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:09.838979    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:09.843218    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:09.843224    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:09.860949    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:09.860960    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:09.873094    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:09.873105    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:09.910927    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:09.910943    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:09.925952    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:09.925968    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:09.941231    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:09.941243    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:09.957192    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:09.957206    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:15:09.972545    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:09.972557    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:09.984770    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:09.984781    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:09.998946    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:09.998958    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:15:07.812666    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:12.512939    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:12.814888    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:12.815163    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:12.839585    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:12.839744    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:12.855928    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:15:12.856019    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:12.868565    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:15:12.868658    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:12.881416    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:15:12.881494    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:12.891783    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:15:12.891867    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:12.902406    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:15:12.902493    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:12.913070    4385 logs.go:276] 0 containers: []
	W0924 12:15:12.913082    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:12.913155    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:12.923496    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:15:12.923525    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:12.923533    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:12.948159    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:15:12.948185    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:12.971638    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:12.971651    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:13.007485    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:15:13.007502    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:15:13.019674    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:15:13.019686    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:15:13.045781    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:15:13.045795    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:15:13.057266    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:13.057279    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:13.090244    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:15:13.090252    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:15:13.104923    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:15:13.104936    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:15:13.116772    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:15:13.116785    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:15:13.128333    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:15:13.128345    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:15:13.139809    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:13.139822    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:13.144516    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:15:13.144524    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:15:13.158714    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:15:13.158728    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:15:13.174436    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:15:13.174448    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:15:15.687199    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:17.513479    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
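
The lines above show the probe that drives this whole transcript: api_server.go polls https://10.0.2.15:8443/healthz with a short client deadline, and every "context deadline exceeded" timeout kicks off another round of log gathering below. A minimal Go sketch of such a probe, assuming an illustrative 2-second timeout and skipped TLS verification (minikube's real client is configured with the cluster's certificates):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second, // assumed; the real deadline comes from a context
            Transport: &http.Transport{
                // Illustration-only shortcut for the VM's self-signed cert.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }
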
	I0924 12:15:17.513727    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:17.530794    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:17.530901    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:17.543246    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:17.543339    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:17.554395    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:17.554479    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:17.564993    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:17.565078    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:17.575425    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:17.575510    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:17.586060    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:17.586138    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:17.596268    4520 logs.go:276] 0 containers: []
	W0924 12:15:17.596280    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:17.596353    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:17.606317    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
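
Each gathering round begins, as above, by enumerating containers per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}; the k8s_ name prefix is the convention cri-dockerd applies to Kubernetes-managed containers. A sketch of that enumeration (run locally for illustration; the transcript runs the same command over SSH inside the guest):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose name
    // carries the k8s_<component> prefix.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
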
	I0924 12:15:17.606348    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:17.606354    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:17.610808    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:17.610814    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:17.624932    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:17.624946    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:17.638807    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:17.638817    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:17.652033    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:17.652044    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:17.690726    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:17.690736    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:17.725899    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:17.725913    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:15:17.740519    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:17.740529    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:17.756705    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:17.756720    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:17.779690    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:17.779700    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:17.791254    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:17.791265    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:17.805989    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:17.806000    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:17.817682    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:17.817693    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:17.857642    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:17.857657    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:17.877153    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:17.877163    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:17.894468    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:17.894479    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:17.916736    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:17.916751    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
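
With the IDs in hand, each named source above is dumped by one shell command: journalctl for the kubelet and Docker units, a filtered dmesg, describe nodes via the versioned kubectl under /var/lib/minikube, and docker logs --tail 400 <id> per container. A condensed sketch of that loop, with command strings copied from the transcript (they assume the minikube guest's systemd units and paths):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes one gathering command the way the transcript does:
    // through /bin/bash -c, so pipes and substitutions work.
    func run(name, cmd string) {
        fmt.Println("Gathering logs for", name, "...")
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println(name, "failed:", err)
            return
        }
        fmt.Print(string(out))
    }

    func main() {
        run("kubelet", "sudo journalctl -u kubelet -n 400")
        run("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        run("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        // Per-container sources reuse the IDs enumerated earlier, e.g.:
        run("coredns [115e170a518b]", "docker logs --tail 400 115e170a518b")
    }
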
	I0924 12:15:20.430168    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:20.689653    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:20.689865    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:20.707643    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:20.707755    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:20.722123    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:15:20.722214    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:20.733463    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:15:20.733556    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:20.744140    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:15:20.744226    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:20.755158    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:15:20.755235    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:20.765419    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:15:20.765506    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:20.775743    4385 logs.go:276] 0 containers: []
	W0924 12:15:20.775754    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:20.775824    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:20.789200    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:15:20.789218    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:15:20.789224    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:15:25.432524    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:25.432778    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:25.450117    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:25.450228    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:25.462745    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:25.462836    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:25.473521    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:25.473593    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:25.485391    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:25.485474    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:25.496362    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:25.496451    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:25.506998    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:25.507084    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:25.517398    4520 logs.go:276] 0 containers: []
	W0924 12:15:25.517411    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:25.517475    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:25.531711    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:15:25.531730    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:25.531735    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:25.546877    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:25.546892    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:25.561133    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:25.561148    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:25.572095    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:25.572106    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:15:25.582988    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:25.582999    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:25.606971    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:25.606978    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:25.645050    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:25.645066    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:25.649453    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:25.649462    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:25.684145    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:25.684160    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:25.695793    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:25.695807    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:25.707343    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:25.707358    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
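
From this point the transcript interleaves two minikube processes, PIDs 4520 and 4385, which is why the timestamp appears to jump backwards on the next line. The klog-style header makes it straightforward to pull one process's trace out; a sketch that keeps a single PID, assuming the standard "Lmmdd hh:mm:ss.uuuuuu PID file:line] msg" layout (the PID value is just an example):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const want = "4385" // example PID to keep
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            // fields[2] is the PID in klog's default header format.
            if len(fields) >= 3 && fields[2] == want {
                fmt.Println(sc.Text())
            }
        }
    }
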
	I0924 12:15:20.800876    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:15:20.800936    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:15:20.814034    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:15:20.814047    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:15:20.832536    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:15:20.832550    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:15:20.850112    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:20.850128    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:20.874269    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:20.874277    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:20.907183    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:20.907190    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:20.946189    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:15:20.946201    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:15:20.957908    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:15:20.957922    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:15:20.974702    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:15:20.974717    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:15:20.990013    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:15:20.990024    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:15:21.001465    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:21.001480    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:21.005937    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:15:21.005943    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:15:21.020086    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:15:21.020099    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:15:21.031422    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:15:21.031435    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:23.546485    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:25.722602    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:25.722616    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:25.739018    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:25.739028    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:25.750492    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:25.750504    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:25.769518    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:25.769526    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:25.784513    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:25.784521    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:25.822651    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:25.822666    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:28.337123    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:28.548751    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:28.548872    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:28.561387    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:28.561480    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:28.572108    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:15:28.572193    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:28.582839    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:15:28.582931    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:28.593395    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:15:28.593475    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:28.603700    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:15:28.603775    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:28.614553    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:15:28.614636    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:28.624733    4385 logs.go:276] 0 containers: []
	W0924 12:15:28.624745    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:28.624820    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:28.635830    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:15:28.635849    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:15:28.635855    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:15:28.650482    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:15:28.650493    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:15:28.668170    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:28.668181    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:28.704004    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:15:28.704015    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:15:28.718527    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:15:28.718542    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:15:28.734715    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:28.734728    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:28.769097    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:15:28.769105    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:15:28.780314    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:15:28.780327    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:15:28.791642    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:15:28.791655    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:15:28.806266    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:15:28.806279    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:15:28.821325    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:15:28.821335    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:15:28.833046    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:15:28.833060    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:15:28.844225    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:28.844236    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:28.868841    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:28.868849    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:28.873096    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:15:28.873106    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
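
The "container status" command above is a shell fallback: the backtick substitution `which crictl || echo crictl` yields either the real crictl path or the literal word crictl, and when the latter fails to execute, || sudo docker ps -a runs instead. The same preference order expressed in Go (illustrative only; the transcript runs the shell form over SSH, under sudo):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when it is on PATH, else fall back to docker.
        cmd := exec.Command("docker", "ps", "-a")
        if path, err := exec.LookPath("crictl"); err == nil {
            cmd = exec.Command(path, "ps", "-a")
        }
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(string(out))
    }
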
	I0924 12:15:33.339448    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:33.339699    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:33.376928    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:33.377022    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:33.389041    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:33.389132    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:33.399697    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:33.399780    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:33.410280    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:33.410362    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:33.420543    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:33.420623    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:33.431501    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:33.431586    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:33.441661    4520 logs.go:276] 0 containers: []
	W0924 12:15:33.441674    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:33.441746    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:33.452509    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:15:33.452527    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:33.452534    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:33.494586    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:33.494595    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:33.510670    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:33.510685    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:33.522763    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:33.522774    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:33.537605    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:33.537619    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:33.542180    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:33.542186    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:33.576632    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:33.576645    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:33.590040    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:33.590052    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:33.604648    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:33.604661    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:33.616236    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:33.616248    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:33.628400    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:33.628413    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:33.645848    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:33.645862    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:33.658394    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:33.658406    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:33.676049    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:33.676059    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:15:33.687414    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:33.687425    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:33.711747    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:33.711757    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:33.750046    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:33.750054    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:15:31.386730    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:36.270691    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:36.389034    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:36.389262    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:36.413222    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:36.413347    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:36.428294    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:15:36.428390    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:36.441386    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:15:36.441483    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:36.452692    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:15:36.452787    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:36.463558    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:15:36.463634    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:36.473728    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:15:36.473799    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:36.483906    4385 logs.go:276] 0 containers: []
	W0924 12:15:36.483919    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:36.483991    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:36.498892    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:15:36.498909    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:15:36.498914    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:15:36.511010    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:15:36.511021    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:15:36.524346    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:15:36.524359    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:15:36.541504    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:15:36.541519    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:15:36.552935    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:36.552945    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:36.577652    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:36.577660    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:36.582015    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:15:36.582023    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:15:36.596181    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:15:36.596194    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:15:36.612389    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:15:36.612401    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:15:36.637242    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:36.637256    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:36.672369    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:36.672379    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:36.706731    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:15:36.706743    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:15:36.718699    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:15:36.718714    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:36.730953    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:15:36.730968    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:15:36.742965    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:15:36.742978    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:15:39.259657    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:41.273037    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:41.273238    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:41.287422    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:41.287505    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:41.299995    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:41.300085    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:41.310813    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:41.310895    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:41.320845    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:41.320921    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:41.331073    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:41.331148    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:41.341881    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:41.341965    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:41.352443    4520 logs.go:276] 0 containers: []
	W0924 12:15:41.352461    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:41.352540    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:41.362837    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:15:41.362857    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:41.362862    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:15:41.374438    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:41.374450    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:41.385954    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:41.385967    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:15:41.404781    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:41.404795    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:41.417526    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:41.417543    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:41.432352    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:41.432363    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:41.444068    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:41.444078    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:41.480574    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:41.480582    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:41.494896    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:41.494906    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:41.510078    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:41.510088    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:41.524660    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:41.524670    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:41.542128    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:41.542136    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:41.546554    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:41.546562    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:41.580569    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:41.580585    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:41.618453    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:41.618463    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:41.632320    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:41.632333    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:41.651567    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:41.651577    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:44.177157    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:44.261953    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:44.262103    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:44.279944    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:44.280044    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:44.293970    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:15:44.294059    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:44.306949    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:15:44.307058    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:44.317734    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:15:44.317822    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:44.327734    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:15:44.327818    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:44.338570    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:15:44.338658    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:44.349131    4385 logs.go:276] 0 containers: []
	W0924 12:15:44.349144    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:44.349220    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:44.359432    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:15:44.359448    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:44.359455    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:44.396998    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:15:44.397006    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:15:44.414696    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:15:44.414707    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:15:44.432896    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:15:44.432912    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:15:44.445185    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:44.445199    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:44.470316    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:15:44.470325    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:44.482207    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:44.482223    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:44.516729    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:15:44.516742    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:15:44.531291    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:15:44.531304    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:15:44.543305    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:15:44.543317    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:15:44.557484    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:15:44.557498    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:15:44.570979    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:44.570993    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:44.575369    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:15:44.575377    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:15:44.587162    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:15:44.587176    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:15:44.598526    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:15:44.598540    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:15:49.179390    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:49.179649    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:49.213937    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:49.214073    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:49.246400    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:49.246494    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:49.262134    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:49.262220    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:49.272299    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:49.272385    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:49.282603    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:49.282686    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:49.293138    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:49.293211    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:49.303449    4520 logs.go:276] 0 containers: []
	W0924 12:15:49.303462    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:49.303537    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:49.314183    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:15:49.314200    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:49.314205    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:49.318419    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:49.318429    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:15:49.337053    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:49.337064    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:49.349249    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:49.349261    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:49.372949    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:49.372957    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:15:49.384173    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:49.384182    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:49.398298    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:49.398312    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:49.436496    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:49.436511    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:49.449895    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:49.449904    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:49.464930    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:49.464941    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:49.488614    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:49.488627    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:49.500517    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:49.500528    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:49.538400    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:49.538412    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:49.552232    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:49.552242    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:49.588082    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:49.588095    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:49.603081    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:49.603095    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:49.616115    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:49.616125    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:47.112083    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:52.128438    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:52.114449    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:52.114721    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:52.139194    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:52.139316    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:52.155254    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:15:52.155356    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:52.168064    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:15:52.168158    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:52.179557    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:15:52.179633    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:52.190186    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:15:52.190273    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:52.200921    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:15:52.201002    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:52.210821    4385 logs.go:276] 0 containers: []
	W0924 12:15:52.210832    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:52.210932    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:52.221424    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:15:52.221444    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:15:52.221451    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:15:52.237789    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:15:52.237801    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:15:52.249057    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:52.249072    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:52.284348    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:52.284359    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:52.319348    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:15:52.319359    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:15:52.331755    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:52.331766    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:52.336578    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:52.336587    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:52.360584    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:15:52.360594    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:15:52.380953    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:15:52.380965    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:52.392639    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:15:52.392650    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:15:52.404816    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:15:52.404833    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:15:52.416733    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:15:52.416749    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:15:52.428247    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:15:52.428262    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:15:52.448461    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:15:52.448475    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:15:52.462162    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:15:52.462173    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:15:54.976490    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:57.130707    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:57.130905    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:57.145779    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:57.145872    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:57.157319    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:57.157398    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:57.169043    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:57.169124    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:57.180523    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:57.180613    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:57.191320    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:57.191403    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:57.211213    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:57.211295    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:57.221609    4520 logs.go:276] 0 containers: []
	W0924 12:15:57.221621    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:57.221690    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:57.232290    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:15:57.232308    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:57.232314    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:57.271832    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:57.271840    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:57.275932    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:57.275941    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:15:57.294392    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:57.294405    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:57.306161    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:57.306175    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:57.319936    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:57.319949    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:57.331450    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:57.331459    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:57.365520    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:57.365534    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:57.379771    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:57.379785    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:57.393673    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:57.393685    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:57.405041    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:57.405051    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:57.420724    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:57.420732    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:57.438010    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:57.438020    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:57.459972    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:57.459980    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:57.502474    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:57.502483    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:57.514830    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:57.514842    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:15:57.531633    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:57.531644    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:00.044954    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:59.977081    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:59.977213    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:59.993655    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:15:59.993744    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:00.006720    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:00.006808    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:00.017117    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:00.017210    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:00.030732    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:00.030809    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:00.041213    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:00.041299    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:00.051578    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:00.051652    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:00.062335    4385 logs.go:276] 0 containers: []
	W0924 12:16:00.062350    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:00.062426    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:00.073449    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:00.073465    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:00.073471    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:00.084952    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:00.084964    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:00.089649    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:00.089658    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:00.104616    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:00.104632    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:00.116331    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:00.116341    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:00.133791    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:00.133804    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:00.145707    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:00.145723    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:00.178905    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:00.178915    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:00.190457    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:00.190469    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:00.206309    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:00.206321    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:00.220238    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:00.220249    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:00.239673    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:00.239684    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:00.251105    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:00.251115    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:00.262587    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:00.262598    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:00.287873    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:00.287881    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:05.045456    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:05.045680    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:05.063802    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:16:05.063911    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:05.076957    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:16:05.077047    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:05.088456    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:16:05.088527    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:05.098799    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:16:05.098886    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:05.108796    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:16:05.108873    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:05.119074    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:16:05.119159    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:05.128930    4520 logs.go:276] 0 containers: []
	W0924 12:16:05.128948    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:05.129025    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:05.139434    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:16:05.139455    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:05.139461    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:05.174743    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:16:05.174755    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:16:05.189545    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:16:05.189554    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:16:05.201524    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:16:05.201534    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:16:05.216274    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:16:05.216285    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:16:05.228074    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:16:05.228084    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:16:05.241341    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:16:05.241350    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:16:05.252671    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:05.252681    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:05.276147    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:05.276154    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:05.280612    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:16:05.280621    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:16:05.294451    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:16:05.294462    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:16:05.305557    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:16:05.305568    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:16:05.319718    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:16:05.319730    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:05.332778    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:05.332789    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:05.370862    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:16:05.370870    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:16:05.413222    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:16:05.413234    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:16:05.430383    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:16:05.430396    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:16:02.826060    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:07.943879    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:07.828366    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:07.828601    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:07.843347    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:16:07.843448    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:07.855551    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:07.855627    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:07.866472    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:07.866551    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:07.877007    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:07.877081    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:07.887043    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:07.887121    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:07.897943    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:07.898027    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:07.920034    4385 logs.go:276] 0 containers: []
	W0924 12:16:07.920044    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:07.920107    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:07.934138    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:07.934159    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:07.934165    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:07.968054    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:07.968068    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:07.972359    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:07.972365    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:07.984174    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:07.984189    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:08.001095    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:08.001106    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:08.012874    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:08.012891    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:08.046742    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:08.046757    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:08.061905    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:08.061916    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:08.073806    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:08.073821    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:08.087724    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:08.087737    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:08.099253    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:08.099267    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:08.111095    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:08.111110    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:08.128599    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:08.128612    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:08.153523    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:08.153530    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:08.165591    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:08.165605    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:10.679584    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:12.945969    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:12.946125    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:12.958252    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:16:12.958351    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:12.969828    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:16:12.969916    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:12.980669    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:16:12.980753    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:12.991417    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:16:12.991505    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:13.001784    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:16:13.001870    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:13.012955    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:16:13.013031    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:13.023772    4520 logs.go:276] 0 containers: []
	W0924 12:16:13.023784    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:13.023860    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:13.033750    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:16:13.033769    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:16:13.033774    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:16:13.047690    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:16:13.047699    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:16:13.085975    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:16:13.085991    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:16:13.097645    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:16:13.097655    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:16:13.115722    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:16:13.115731    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:16:13.126696    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:13.126706    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:13.163860    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:16:13.163870    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:16:13.178399    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:16:13.178409    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:16:13.190269    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:16:13.190280    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:16:13.201487    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:13.201499    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:13.224944    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:16:13.224953    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:13.236676    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:13.236688    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:13.241261    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:13.241270    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:13.275773    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:16:13.275784    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:16:13.292612    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:16:13.292623    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:16:13.307148    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:16:13.307164    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:16:13.322437    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:16:13.322447    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:16:15.682000    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:15.682268    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:15.704578    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:16:15.704720    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:15.720701    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:15.720799    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:15.733274    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:15.733366    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:15.745301    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:15.745378    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:15.760026    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:15.760106    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:15.770392    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:15.770475    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:15.780430    4385 logs.go:276] 0 containers: []
	W0924 12:16:15.780440    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:15.780503    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:15.790602    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:15.790624    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:15.790629    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:15.839346    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:15.802302    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:15.802318    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:15.827437    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:15.827447    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:15.832047    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:15.832057    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:15.846473    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:15.846483    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:15.884385    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:15.884396    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:15.898823    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:15.898834    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:15.910698    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:15.910709    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:15.925668    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:15.925680    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:15.938259    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:15.938270    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:15.953615    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:15.953630    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:15.969439    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:15.969450    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:15.986947    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:15.986960    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:15.999670    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:15.999683    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:16.034782    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:16.034794    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:18.551388    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:20.841645    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:20.842241    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:20.879166    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:16:20.879336    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:20.900710    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:16:20.900829    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:20.917291    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:16:20.917390    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:20.929738    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:16:20.929824    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:20.943040    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:16:20.943128    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:20.954183    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:16:20.954272    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:20.964951    4520 logs.go:276] 0 containers: []
	W0924 12:16:20.964963    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:20.965037    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:20.975539    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:16:20.975557    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:20.975563    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:20.980102    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:16:20.980108    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:16:20.992411    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:16:20.992420    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:16:21.004475    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:16:21.004484    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:21.017684    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:16:21.017697    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:16:21.056004    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:16:21.056015    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:16:21.068000    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:16:21.068013    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:16:21.091437    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:16:21.091446    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:16:21.110845    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:21.110854    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:21.152610    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:16:21.152623    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:16:21.168583    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:16:21.168596    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:16:21.183629    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:16:21.183645    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:16:21.194860    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:21.194871    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:21.216538    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:21.216546    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:21.253540    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:16:21.253550    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:16:21.268074    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:16:21.268088    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:16:21.282928    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:16:21.282939    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:16:23.796658    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:23.553487    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:23.553722    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:23.574749    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:16:23.574853    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:23.587672    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:23.587761    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:23.598772    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:23.598862    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:23.609625    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:23.609710    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:23.620144    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:23.620226    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:23.630548    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:23.630625    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:23.640467    4385 logs.go:276] 0 containers: []
	W0924 12:16:23.640481    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:23.640558    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:23.651687    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:23.651709    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:23.651717    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:23.665697    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:23.665708    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:23.680085    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:23.680101    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:23.700621    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:23.700637    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:23.716292    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:23.716305    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:23.728471    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:23.728481    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:23.740087    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:23.740097    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:23.764880    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:23.764888    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:23.799161    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:23.799169    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:23.803521    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:23.803529    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:23.838465    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:23.838475    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:23.852929    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:23.852939    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:23.864983    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:23.864994    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:23.876904    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:23.876918    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:23.890714    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:23.890730    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:28.798927    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:28.799082    4520 kubeadm.go:597] duration metric: took 4m3.857838083s to restartPrimaryControlPlane
	W0924 12:16:28.799205    4520 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
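
The four minutes of output above are minikube's restartPrimaryControlPlane wait loop: it probes the apiserver's /healthz endpoint with a short per-request timeout, dumps the control-plane container logs after each failed probe, and once the overall deadline passes it gives up and falls back to a full kubeadm reset/init, which is what happens next. A minimal Go sketch of that poll-then-fall-back shape, assuming a self-signed in-VM apiserver certificate; pollHealthz and its constants are illustrative, not minikube's actual API:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz probes url until it answers 200 OK or the overall deadline
    // expires. Each GET carries its own short timeout, which is what produces
    // the "Client.Timeout exceeded while awaiting headers" lines above.
    func pollHealthz(url string, overall time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-request timeout
            Transport: &http.Transport{
                // the in-VM apiserver certificate is not signed by a public CA
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // control plane is healthy
                }
            }
            // the real loop gathers docker/kubelet logs here before retrying
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("apiserver not healthy after %s", overall)
    }

    func main() {
        if err := pollHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println("falling back to kubeadm reset:", err)
        }
    }
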
	I0924 12:16:28.799251    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0924 12:16:29.788346    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 12:16:29.793338    4520 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 12:16:29.796186    4520 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 12:16:29.798825    4520 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 12:16:29.798832    4520 kubeadm.go:157] found existing configuration files:
	
	I0924 12:16:29.798862    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/admin.conf
	I0924 12:16:29.801314    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 12:16:29.801347    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 12:16:29.804392    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/kubelet.conf
	I0924 12:16:29.807485    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 12:16:29.807508    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 12:16:29.810083    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/controller-manager.conf
	I0924 12:16:29.812602    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 12:16:29.812623    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 12:16:29.816026    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/scheduler.conf
	I0924 12:16:29.819347    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 12:16:29.819370    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
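
The grep/rm pairs above are minikube's stale-kubeconfig sweep: each of the four kubeconfigs under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is removed otherwise so that kubeadm regenerates it (here every grep exits with status 2 simply because kubeadm reset has already deleted the files). A local-filesystem sketch of the same sweep in Go; the endpoint string is copied from the log, while the loop itself is an illustration rather than minikube source:

    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:50530"
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, name := range files {
            path := filepath.Join("/etc/kubernetes", name)
            data, err := os.ReadFile(path)
            // missing or pointing at the wrong endpoint: remove the file so
            // kubeadm writes a fresh one during init
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(path)
            }
        }
    }
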
	I0924 12:16:29.822025    4520 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 12:16:29.838727    4520 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0924 12:16:29.838869    4520 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 12:16:29.885917    4520 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 12:16:29.885980    4520 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 12:16:29.886046    4520 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 12:16:29.940606    4520 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 12:16:29.944793    4520 out.go:235]   - Generating certificates and keys ...
	I0924 12:16:29.944831    4520 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 12:16:29.944865    4520 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 12:16:29.944915    4520 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 12:16:29.944951    4520 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 12:16:29.944986    4520 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 12:16:29.945020    4520 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 12:16:29.945093    4520 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 12:16:29.945125    4520 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 12:16:29.945163    4520 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 12:16:29.945204    4520 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 12:16:29.945227    4520 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 12:16:29.945259    4520 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 12:16:30.074324    4520 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 12:16:30.298264    4520 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 12:16:30.483238    4520 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 12:16:30.621913    4520 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 12:16:30.650585    4520 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 12:16:30.650952    4520 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 12:16:30.651044    4520 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 12:16:30.750133    4520 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 12:16:26.404726    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:30.753759    4520 out.go:235]   - Booting up control plane ...
	I0924 12:16:30.753842    4520 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 12:16:30.753881    4520 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 12:16:30.753915    4520 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 12:16:30.754000    4520 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 12:16:30.766439    4520 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 12:16:35.268949    4520 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502323 seconds
	I0924 12:16:35.269030    4520 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 12:16:35.273506    4520 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 12:16:31.407046    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:31.407355    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:31.430112    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:16:31.430256    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:31.446865    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:31.446959    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:31.459492    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:31.459575    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:31.472785    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:31.472869    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:31.485178    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:31.485269    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:31.496685    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:31.496820    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:31.508271    4385 logs.go:276] 0 containers: []
	W0924 12:16:31.508284    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:31.508360    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:31.519861    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:31.519880    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:31.519886    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:31.533087    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:31.533100    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:31.552989    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:31.553011    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:31.591919    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:31.591936    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:31.608592    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:31.608608    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:31.625615    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:31.625625    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:31.645224    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:31.645237    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:31.672793    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:31.672808    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:31.685647    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:31.685663    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:31.701093    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:31.701109    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:31.713805    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:31.713817    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:31.750379    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:31.750398    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:31.755178    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:31.755190    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:31.767830    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:31.767845    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:31.783831    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:31.783848    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:34.303897    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:35.796010    4520 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 12:16:35.796527    4520 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-164000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 12:16:36.299967    4520 kubeadm.go:310] [bootstrap-token] Using token: c9u9by.23bn0i7xcp6mmhzp
	I0924 12:16:36.302920    4520 out.go:235]   - Configuring RBAC rules ...
	I0924 12:16:36.302984    4520 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 12:16:36.303035    4520 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 12:16:36.304952    4520 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 12:16:36.309627    4520 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 12:16:36.310567    4520 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 12:16:36.311427    4520 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 12:16:36.316021    4520 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 12:16:36.484223    4520 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 12:16:36.704005    4520 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 12:16:36.704500    4520 kubeadm.go:310] 
	I0924 12:16:36.704543    4520 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 12:16:36.704548    4520 kubeadm.go:310] 
	I0924 12:16:36.704585    4520 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 12:16:36.704658    4520 kubeadm.go:310] 
	I0924 12:16:36.704686    4520 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 12:16:36.704757    4520 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 12:16:36.704797    4520 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 12:16:36.704805    4520 kubeadm.go:310] 
	I0924 12:16:36.704834    4520 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 12:16:36.704842    4520 kubeadm.go:310] 
	I0924 12:16:36.704887    4520 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 12:16:36.704891    4520 kubeadm.go:310] 
	I0924 12:16:36.704958    4520 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 12:16:36.705001    4520 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 12:16:36.705037    4520 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 12:16:36.705040    4520 kubeadm.go:310] 
	I0924 12:16:36.705131    4520 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 12:16:36.705223    4520 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 12:16:36.705227    4520 kubeadm.go:310] 
	I0924 12:16:36.705317    4520 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token c9u9by.23bn0i7xcp6mmhzp \
	I0924 12:16:36.705409    4520 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4250e15ce19ea6ee8d936fb77d1a59ad22f9367fb00a8a9aa9e1b7fb7d1933b3 \
	I0924 12:16:36.705458    4520 kubeadm.go:310] 	--control-plane 
	I0924 12:16:36.705462    4520 kubeadm.go:310] 
	I0924 12:16:36.705505    4520 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 12:16:36.705513    4520 kubeadm.go:310] 
	I0924 12:16:36.705584    4520 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token c9u9by.23bn0i7xcp6mmhzp \
	I0924 12:16:36.705644    4520 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4250e15ce19ea6ee8d936fb77d1a59ad22f9367fb00a8a9aa9e1b7fb7d1933b3 
	I0924 12:16:36.705729    4520 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
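
The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key (its SubjectPublicKeyInfo); joining nodes use it to pin the CA before trusting the bootstrap token. A short Go sketch that recomputes the hash from the CA certificate, assuming kubeadm's default ca.crt path:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // DER-encode the SubjectPublicKeyInfo and hash it, matching the
        // sha256:... value kubeadm prints
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
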
	I0924 12:16:36.705740    4520 cni.go:84] Creating CNI manager for ""
	I0924 12:16:36.705748    4520 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:16:36.710171    4520 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 12:16:36.717182    4520 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 12:16:36.720106    4520 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
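
For context, the bridge CNI step above writes a small conflist (496 bytes in this run). The sketch below shows the typical shape of such a file; the exact bytes minikube generated are not captured in this log, so the subnet and plugin settings here are illustrative assumptions, not the actual file.

  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
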
	I0924 12:16:36.725042    4520 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 12:16:36.725094    4520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 12:16:36.725108    4520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-164000 minikube.k8s.io/updated_at=2024_09_24T12_16_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=stopped-upgrade-164000 minikube.k8s.io/primary=true
	I0924 12:16:36.766278    4520 ops.go:34] apiserver oom_adj: -16
	I0924 12:16:36.766295    4520 kubeadm.go:1113] duration metric: took 41.238708ms to wait for elevateKubeSystemPrivileges
	I0924 12:16:36.766433    4520 kubeadm.go:394] duration metric: took 4m11.84662725s to StartCluster
	I0924 12:16:36.766446    4520 settings.go:142] acquiring lock: {Name:mk8f5a1e4973fb47308ad8c9735bcc716ada1e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:16:36.766531    4520 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:16:36.766990    4520 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/kubeconfig: {Name:mk406b8f0f5e016c0aa63af8364801bb91be8bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:16:36.767206    4520 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:16:36.767214    4520 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 12:16:36.767254    4520 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-164000"
	I0924 12:16:36.767272    4520 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-164000"
	W0924 12:16:36.767276    4520 addons.go:243] addon storage-provisioner should already be in state true
	I0924 12:16:36.767285    4520 host.go:66] Checking if "stopped-upgrade-164000" exists ...
	I0924 12:16:36.767294    4520 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-164000"
	I0924 12:16:36.767300    4520 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-164000"
	I0924 12:16:36.767284    4520 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:16:36.768229    4520 kapi.go:59] client config for stopped-upgrade-164000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/client.key", CAFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10666e030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 12:16:36.768350    4520 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-164000"
	W0924 12:16:36.768354    4520 addons.go:243] addon default-storageclass should already be in state true
	I0924 12:16:36.768362    4520 host.go:66] Checking if "stopped-upgrade-164000" exists ...
	I0924 12:16:36.771187    4520 out.go:177] * Verifying Kubernetes components...
	I0924 12:16:36.771494    4520 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 12:16:36.775324    4520 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 12:16:36.775331    4520 sshutil.go:53] new ssh client: &{IP:localhost Port:50495 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/id_rsa Username:docker}
	I0924 12:16:36.779132    4520 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:16:36.785163    4520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:16:36.788192    4520 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 12:16:36.788198    4520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 12:16:36.788206    4520 sshutil.go:53] new ssh client: &{IP:localhost Port:50495 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/id_rsa Username:docker}
	I0924 12:16:36.874239    4520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 12:16:36.878929    4520 api_server.go:52] waiting for apiserver process to appear ...
	I0924 12:16:36.878980    4520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 12:16:36.882822    4520 api_server.go:72] duration metric: took 115.606833ms to wait for apiserver process to appear ...
	I0924 12:16:36.882830    4520 api_server.go:88] waiting for apiserver healthz status ...
	I0924 12:16:36.882837    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:36.894928    4520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 12:16:36.951771    4520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 12:16:37.279623    4520 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0924 12:16:37.279635    4520 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
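
When a start stalls in this healthz loop, the same probe can be reproduced by hand. A sketch, assuming the guest image ships curl and the profile is still up (endpoint and profile name taken from the log above):

  # Probe the endpoint minikube is polling, from inside the guest:
  minikube ssh -p stopped-upgrade-164000 -- curl -sk https://10.0.2.15:8443/healthz
  # Or go through the profile's kubeconfig from the host:
  kubectl --context stopped-upgrade-164000 get --raw /healthz
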
	I0924 12:16:39.306173    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:39.306486    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:39.331422    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:16:39.331557    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:39.349641    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:39.349739    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:39.362474    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:39.362567    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:39.373845    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:39.373930    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:39.384645    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:39.384729    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:39.395341    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:39.395430    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:39.405405    4385 logs.go:276] 0 containers: []
	W0924 12:16:39.405421    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:39.405490    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:39.415656    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:39.415674    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:39.415680    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:39.427438    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:39.427449    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:39.449138    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:39.449147    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:39.473937    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:39.473944    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:39.516632    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:39.516650    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:39.528414    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:39.528430    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:39.540326    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:39.540341    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:39.555101    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:39.555115    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:39.566640    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:39.566651    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:39.580969    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:39.580984    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:39.592496    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:39.592510    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:39.597249    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:39.597256    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:39.611391    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:39.611406    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:39.622858    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:39.622869    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:39.656607    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:39.656618    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:41.884898    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:41.884940    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:42.180505    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:46.885541    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:46.885571    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:47.182556    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:47.182726    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:47.195355    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:16:47.195445    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:47.206485    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:47.206581    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:47.219813    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:47.219898    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:47.230333    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:47.230412    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:47.243753    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:47.243838    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:47.254217    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:47.254299    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:47.264276    4385 logs.go:276] 0 containers: []
	W0924 12:16:47.264287    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:47.264359    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:47.274857    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:47.274877    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:47.274883    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:47.279330    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:47.279339    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:47.314456    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:47.314467    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:47.331882    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:47.331896    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:47.367079    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:47.367088    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:47.379270    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:47.379281    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:47.390876    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:47.390888    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:47.402308    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:47.402319    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:47.416594    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:47.416604    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:47.428471    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:47.428481    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:47.443027    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:47.443040    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:47.455121    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:47.455135    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:47.467269    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:47.467283    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:47.482373    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:47.482390    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:47.494366    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:47.494377    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:50.020109    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:51.885945    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:51.885984    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:55.022322    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:55.022488    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:55.041217    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:16:55.041313    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:55.055019    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:16:55.055113    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:55.066186    4385 logs.go:276] 4 containers: [9cf23ff694c1 3768dd912d0b d70eedf42cf6 77dfe0886a80]
	I0924 12:16:55.066268    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:55.076927    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:16:55.077017    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:55.087320    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:16:55.087406    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:55.097679    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:16:55.097757    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:55.108002    4385 logs.go:276] 0 containers: []
	W0924 12:16:55.108013    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:55.108085    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:55.118763    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:16:55.118791    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:16:55.118797    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:16:55.133328    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:16:55.133339    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:16:55.146697    4385 logs.go:123] Gathering logs for coredns [d70eedf42cf6] ...
	I0924 12:16:55.146708    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d70eedf42cf6"
	I0924 12:16:55.159058    4385 logs.go:123] Gathering logs for coredns [77dfe0886a80] ...
	I0924 12:16:55.159074    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77dfe0886a80"
	I0924 12:16:55.170703    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:16:55.170714    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:16:55.181862    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:16:55.181872    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:55.193281    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:55.193298    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:55.198179    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:55.198186    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:55.233277    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:16:55.233289    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:16:55.245705    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:16:55.245721    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:16:55.257673    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:16:55.257684    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:16:55.276660    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:16:55.276670    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:16:55.294457    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:55.294468    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:55.327990    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:16:55.327997    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:16:55.342602    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:55.342615    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:56.886520    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:56.886582    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:57.867982    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:01.887372    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:01.887420    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:02.870266    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:02.870401    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:17:02.881617    4385 logs.go:276] 1 containers: [7a189c15c27c]
	I0924 12:17:02.881706    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:17:02.892324    4385 logs.go:276] 1 containers: [3aa21a075b24]
	I0924 12:17:02.892415    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:17:02.903870    4385 logs.go:276] 4 containers: [2f88f1e45b5c 34ce64cc0e05 9cf23ff694c1 3768dd912d0b]
	I0924 12:17:02.903955    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:17:02.914219    4385 logs.go:276] 1 containers: [5c8bbb3e6700]
	I0924 12:17:02.914312    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:17:02.924864    4385 logs.go:276] 1 containers: [1bdc5344b17d]
	I0924 12:17:02.924956    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:17:02.935492    4385 logs.go:276] 1 containers: [4abea8a839d2]
	I0924 12:17:02.935582    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:17:02.947491    4385 logs.go:276] 0 containers: []
	W0924 12:17:02.947504    4385 logs.go:278] No container was found matching "kindnet"
	I0924 12:17:02.947583    4385 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:17:02.959420    4385 logs.go:276] 1 containers: [1893b5bb7145]
	I0924 12:17:02.959440    4385 logs.go:123] Gathering logs for kubelet ...
	I0924 12:17:02.959447    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:17:02.993982    4385 logs.go:123] Gathering logs for coredns [34ce64cc0e05] ...
	I0924 12:17:02.993996    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ce64cc0e05"
	I0924 12:17:03.005752    4385 logs.go:123] Gathering logs for container status ...
	I0924 12:17:03.005767    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:17:03.018860    4385 logs.go:123] Gathering logs for storage-provisioner [1893b5bb7145] ...
	I0924 12:17:03.018873    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1893b5bb7145"
	I0924 12:17:03.033051    4385 logs.go:123] Gathering logs for Docker ...
	I0924 12:17:03.033062    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:17:03.057830    4385 logs.go:123] Gathering logs for coredns [2f88f1e45b5c] ...
	I0924 12:17:03.057843    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f88f1e45b5c"
	I0924 12:17:03.071556    4385 logs.go:123] Gathering logs for coredns [9cf23ff694c1] ...
	I0924 12:17:03.071568    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9cf23ff694c1"
	I0924 12:17:03.088684    4385 logs.go:123] Gathering logs for kube-scheduler [5c8bbb3e6700] ...
	I0924 12:17:03.088700    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8bbb3e6700"
	I0924 12:17:03.104045    4385 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:17:03.104055    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:17:03.138488    4385 logs.go:123] Gathering logs for etcd [3aa21a075b24] ...
	I0924 12:17:03.138503    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa21a075b24"
	I0924 12:17:03.153018    4385 logs.go:123] Gathering logs for kube-proxy [1bdc5344b17d] ...
	I0924 12:17:03.153029    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdc5344b17d"
	I0924 12:17:03.165636    4385 logs.go:123] Gathering logs for kube-controller-manager [4abea8a839d2] ...
	I0924 12:17:03.165647    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4abea8a839d2"
	I0924 12:17:03.183852    4385 logs.go:123] Gathering logs for dmesg ...
	I0924 12:17:03.183863    4385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:17:03.188433    4385 logs.go:123] Gathering logs for kube-apiserver [7a189c15c27c] ...
	I0924 12:17:03.188440    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a189c15c27c"
	I0924 12:17:03.203070    4385 logs.go:123] Gathering logs for coredns [3768dd912d0b] ...
	I0924 12:17:03.203081    4385 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3768dd912d0b"
	I0924 12:17:05.717532    4385 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:06.888435    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:06.888483    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0924 12:17:07.281681    4520 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0924 12:17:07.285827    4520 out.go:177] * Enabled addons: storage-provisioner
	I0924 12:17:07.297801    4520 addons.go:510] duration metric: took 30.530796166s for enable addons: enabled=[storage-provisioner]
	I0924 12:17:10.719786    4385 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:10.724520    4385 out.go:201] 
	W0924 12:17:10.728505    4385 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0924 12:17:10.728510    4385 out.go:270] * 
	W0924 12:17:10.728931    4385 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:17:10.739464    4385 out.go:201] 
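
The 6m0s budget that just expired is the node wait window logged earlier ("Will wait 6m0s for node"), which corresponds to minikube's --wait-timeout flag. One low-effort retry, sketched under the assumption that the QEMU guest is merely slow rather than unreachable:

  # Retry the failed profile with a longer apiserver wait window.
  minikube start -p running-upgrade-070000 --wait-timeout=15m
  # If healthz still never reports healthy, gather logs as the box above asks:
  minikube logs -p running-upgrade-070000 --file=logs.txt
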
	I0924 12:17:11.890034    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:11.890096    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:16.891712    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:16.891758    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:21.894035    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:21.894084    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-09-24 19:08:21 UTC, ends at Tue 2024-09-24 19:17:26 UTC. --
	Sep 24 19:17:02 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:02Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 24 19:17:07 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:07Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 24 19:17:11 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:11Z" level=error msg="ContainerStats resp: {0x40004773c0 linux}"
	Sep 24 19:17:11 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:11Z" level=error msg="ContainerStats resp: {0x4000477800 linux}"
	Sep 24 19:17:12 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:12Z" level=error msg="ContainerStats resp: {0x400078a2c0 linux}"
	Sep 24 19:17:12 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:12Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 24 19:17:13 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:13Z" level=error msg="ContainerStats resp: {0x400078b040 linux}"
	Sep 24 19:17:13 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:13Z" level=error msg="ContainerStats resp: {0x4000114a80 linux}"
	Sep 24 19:17:13 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:13Z" level=error msg="ContainerStats resp: {0x400078a600 linux}"
	Sep 24 19:17:13 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:13Z" level=error msg="ContainerStats resp: {0x40001155c0 linux}"
	Sep 24 19:17:13 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:13Z" level=error msg="ContainerStats resp: {0x40000b0480 linux}"
	Sep 24 19:17:13 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:13Z" level=error msg="ContainerStats resp: {0x40000b1500 linux}"
	Sep 24 19:17:13 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:13Z" level=error msg="ContainerStats resp: {0x400078bcc0 linux}"
	Sep 24 19:17:17 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:17Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 24 19:17:22 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:22Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 24 19:17:23 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:23Z" level=error msg="ContainerStats resp: {0x4000903340 linux}"
	Sep 24 19:17:23 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:23Z" level=error msg="ContainerStats resp: {0x4000903740 linux}"
	Sep 24 19:17:24 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:24Z" level=error msg="ContainerStats resp: {0x40000af7c0 linux}"
	Sep 24 19:17:25 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:25Z" level=error msg="ContainerStats resp: {0x4000115ec0 linux}"
	Sep 24 19:17:25 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:25Z" level=error msg="ContainerStats resp: {0x400078a040 linux}"
	Sep 24 19:17:25 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:25Z" level=error msg="ContainerStats resp: {0x400078a4c0 linux}"
	Sep 24 19:17:25 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:25Z" level=error msg="ContainerStats resp: {0x400078a780 linux}"
	Sep 24 19:17:25 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:25Z" level=error msg="ContainerStats resp: {0x40004764c0 linux}"
	Sep 24 19:17:25 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:25Z" level=error msg="ContainerStats resp: {0x4000476c40 linux}"
	Sep 24 19:17:25 running-upgrade-070000 cri-dockerd[3009]: time="2024-09-24T19:17:25Z" level=error msg="ContainerStats resp: {0x4000476f40 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	2f88f1e45b5c1       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   af50fe2c15bc2
	34ce64cc0e054       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   939c54afb7306
	9cf23ff694c1c       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   939c54afb7306
	3768dd912d0be       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   af50fe2c15bc2
	1893b5bb71452       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   952e7247da1f4
	1bdc5344b17d2       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   0cf5f2297e913
	5c8bbb3e6700a       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   9e4847ba5ef15
	3aa21a075b24c       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   28e9a08018176
	7a189c15c27c9       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   8156682e39269
	4abea8a839d2f       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   ce34076692ce1
	
	
	==> coredns [2f88f1e45b5c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6181270068876487224.6393543998363537209. HINFO: read udp 10.244.0.3:41706->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6181270068876487224.6393543998363537209. HINFO: read udp 10.244.0.3:52079->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6181270068876487224.6393543998363537209. HINFO: read udp 10.244.0.3:34166->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6181270068876487224.6393543998363537209. HINFO: read udp 10.244.0.3:56541->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6181270068876487224.6393543998363537209. HINFO: read udp 10.244.0.3:56823->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6181270068876487224.6393543998363537209. HINFO: read udp 10.244.0.3:60761->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6181270068876487224.6393543998363537209. HINFO: read udp 10.244.0.3:34699->10.0.2.3:53: i/o timeout
	
	
	==> coredns [34ce64cc0e05] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7629396786213126092.8292506448374143935. HINFO: read udp 10.244.0.2:33380->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7629396786213126092.8292506448374143935. HINFO: read udp 10.244.0.2:53842->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7629396786213126092.8292506448374143935. HINFO: read udp 10.244.0.2:33811->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7629396786213126092.8292506448374143935. HINFO: read udp 10.244.0.2:58887->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7629396786213126092.8292506448374143935. HINFO: read udp 10.244.0.2:39686->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7629396786213126092.8292506448374143935. HINFO: read udp 10.244.0.2:34620->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7629396786213126092.8292506448374143935. HINFO: read udp 10.244.0.2:58536->10.0.2.3:53: i/o timeout
	
	
	==> coredns [3768dd912d0b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5681138110593153009.1808909118007642585. HINFO: read udp 10.244.0.3:42636->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5681138110593153009.1808909118007642585. HINFO: read udp 10.244.0.3:48682->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5681138110593153009.1808909118007642585. HINFO: read udp 10.244.0.3:44452->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5681138110593153009.1808909118007642585. HINFO: read udp 10.244.0.3:43990->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5681138110593153009.1808909118007642585. HINFO: read udp 10.244.0.3:58882->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5681138110593153009.1808909118007642585. HINFO: read udp 10.244.0.3:58251->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5681138110593153009.1808909118007642585. HINFO: read udp 10.244.0.3:33596->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5681138110593153009.1808909118007642585. HINFO: read udp 10.244.0.3:60185->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5681138110593153009.1808909118007642585. HINFO: read udp 10.244.0.3:56360->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5681138110593153009.1808909118007642585. HINFO: read udp 10.244.0.3:40746->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9cf23ff694c1] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8170589452303682897.549437724133328489. HINFO: read udp 10.244.0.2:34276->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8170589452303682897.549437724133328489. HINFO: read udp 10.244.0.2:56840->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8170589452303682897.549437724133328489. HINFO: read udp 10.244.0.2:44477->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8170589452303682897.549437724133328489. HINFO: read udp 10.244.0.2:42877->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8170589452303682897.549437724133328489. HINFO: read udp 10.244.0.2:42998->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8170589452303682897.549437724133328489. HINFO: read udp 10.244.0.2:47423->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8170589452303682897.549437724133328489. HINFO: read udp 10.244.0.2:53008->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8170589452303682897.549437724133328489. HINFO: read udp 10.244.0.2:45593->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8170589452303682897.549437724133328489. HINFO: read udp 10.244.0.2:40692->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8170589452303682897.549437724133328489. HINFO: read udp 10.244.0.2:55034->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
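
All four coredns containers above fail the same way: their startup HINFO self-check queries to 10.0.2.3:53 (the QEMU user-mode resolver) time out, so the pods have no working upstream DNS. A quick way to confirm the guest-to-resolver path is at fault, sketched assuming busybox nslookup is present in the guest image:

  # Query the QEMU built-in resolver directly from the guest:
  minikube ssh -p running-upgrade-070000 -- nslookup kubernetes.io 10.0.2.3
  # A timeout here implicates the user-network DNS path rather than coredns itself.
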
	
	
	==> describe nodes <==
	Name:               running-upgrade-070000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-070000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=running-upgrade-070000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T12_13_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 19:13:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-070000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 19:17:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 19:13:09 +0000   Tue, 24 Sep 2024 19:13:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 19:13:09 +0000   Tue, 24 Sep 2024 19:13:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 19:13:09 +0000   Tue, 24 Sep 2024 19:13:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 19:13:09 +0000   Tue, 24 Sep 2024 19:13:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-070000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 97cec3b4181e429e8bf0013a67f7704d
	  System UUID:                97cec3b4181e429e8bf0013a67f7704d
	  Boot ID:                    c3069cb5-fc4d-44c4-94d7-9920236570a8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-x8dh5                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-xtcdj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-070000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-070000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-running-upgrade-070000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-6ffnj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-070000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 4m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-070000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-070000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x3 over 4m22s)  kubelet          Node running-upgrade-070000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-070000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-070000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-070000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-070000 status is now: NodeReady
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-070000 event: Registered Node running-upgrade-070000 in Controller
	
	
	==> dmesg <==
	[  +1.775923] systemd-fstab-generator[871]: Ignoring "noauto" for root device
	[  +0.065443] systemd-fstab-generator[882]: Ignoring "noauto" for root device
	[  +0.080648] systemd-fstab-generator[893]: Ignoring "noauto" for root device
	[  +0.190502] systemd-fstab-generator[1044]: Ignoring "noauto" for root device
	[  +0.073699] systemd-fstab-generator[1055]: Ignoring "noauto" for root device
	[  +2.079993] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[  +0.181960] kauditd_printk_skb: 92 callbacks suppressed
	[  +7.974044] systemd-fstab-generator[1944]: Ignoring "noauto" for root device
	[  +2.815143] systemd-fstab-generator[2224]: Ignoring "noauto" for root device
	[  +0.153504] systemd-fstab-generator[2259]: Ignoring "noauto" for root device
	[  +0.099355] systemd-fstab-generator[2270]: Ignoring "noauto" for root device
	[  +0.088760] systemd-fstab-generator[2283]: Ignoring "noauto" for root device
	[  +3.376066] kauditd_printk_skb: 8 callbacks suppressed
	[  +0.191853] systemd-fstab-generator[2966]: Ignoring "noauto" for root device
	[  +0.085335] systemd-fstab-generator[2977]: Ignoring "noauto" for root device
	[  +0.064842] systemd-fstab-generator[2988]: Ignoring "noauto" for root device
	[  +0.090870] systemd-fstab-generator[3002]: Ignoring "noauto" for root device
	[  +2.152782] systemd-fstab-generator[3160]: Ignoring "noauto" for root device
	[  +3.625146] systemd-fstab-generator[3587]: Ignoring "noauto" for root device
	[  +1.611962] systemd-fstab-generator[3884]: Ignoring "noauto" for root device
	[Sep24 19:09] kauditd_printk_skb: 68 callbacks suppressed
	[Sep24 19:13] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.419968] systemd-fstab-generator[11883]: Ignoring "noauto" for root device
	[  +5.637754] systemd-fstab-generator[12489]: Ignoring "noauto" for root device
	[  +0.457395] systemd-fstab-generator[12625]: Ignoring "noauto" for root device
	
	
	==> etcd [3aa21a075b24] <==
	{"level":"info","ts":"2024-09-24T19:13:05.358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-24T19:13:05.359Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-24T19:13:05.367Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-24T19:13:05.367Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-24T19:13:05.367Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-24T19:13:05.368Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T19:13:05.368Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T19:13:05.833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-24T19:13:05.833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-24T19:13:05.833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-24T19:13:05.833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-24T19:13:05.833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-24T19:13:05.833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-24T19:13:05.833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-24T19:13:05.833Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:13:05.834Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:13:05.834Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:13:05.834Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T19:13:05.834Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-070000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T19:13:05.834Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:13:05.835Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-24T19:13:05.834Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T19:13:05.835Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T19:13:05.834Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T19:13:05.835Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:17:27 up 9 min,  0 users,  load average: 0.57, 0.43, 0.23
	Linux running-upgrade-070000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [7a189c15c27c] <==
	I0924 19:13:07.086548       1 controller.go:611] quota admission added evaluator for: namespaces
	I0924 19:13:07.129255       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0924 19:13:07.129299       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0924 19:13:07.129314       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0924 19:13:07.129320       1 cache.go:39] Caches are synced for autoregister controller
	I0924 19:13:07.129356       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0924 19:13:07.141640       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0924 19:13:07.880702       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0924 19:13:08.034682       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0924 19:13:08.039174       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0924 19:13:08.039706       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0924 19:13:08.178441       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 19:13:08.188056       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0924 19:13:08.291793       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0924 19:13:08.294240       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0924 19:13:08.294682       1 controller.go:611] quota admission added evaluator for: endpoints
	I0924 19:13:08.295956       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 19:13:09.161619       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0924 19:13:09.758711       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0924 19:13:09.762310       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0924 19:13:09.783108       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0924 19:13:09.819620       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0924 19:13:22.766078       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0924 19:13:22.816171       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0924 19:13:23.259034       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [4abea8a839d2] <==
	I0924 19:13:22.015954       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0924 19:13:22.016080       1 shared_informer.go:262] Caches are synced for TTL
	I0924 19:13:22.016196       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0924 19:13:22.016243       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0924 19:13:22.016273       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0924 19:13:22.016281       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0924 19:13:22.040837       1 shared_informer.go:262] Caches are synced for daemon sets
	I0924 19:13:22.079341       1 shared_informer.go:262] Caches are synced for deployment
	I0924 19:13:22.112792       1 shared_informer.go:262] Caches are synced for disruption
	I0924 19:13:22.112803       1 disruption.go:371] Sending events to api server.
	I0924 19:13:22.115526       1 shared_informer.go:262] Caches are synced for stateful set
	I0924 19:13:22.117189       1 shared_informer.go:262] Caches are synced for resource quota
	I0924 19:13:22.143972       1 shared_informer.go:262] Caches are synced for cronjob
	I0924 19:13:22.155161       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0924 19:13:22.174015       1 shared_informer.go:262] Caches are synced for resource quota
	I0924 19:13:22.182468       1 shared_informer.go:262] Caches are synced for job
	I0924 19:13:22.263489       1 shared_informer.go:262] Caches are synced for namespace
	I0924 19:13:22.265337       1 shared_informer.go:262] Caches are synced for service account
	I0924 19:13:22.632436       1 shared_informer.go:262] Caches are synced for garbage collector
	I0924 19:13:22.714989       1 shared_informer.go:262] Caches are synced for garbage collector
	I0924 19:13:22.715073       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0924 19:13:22.768439       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6ffnj"
	I0924 19:13:22.817563       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0924 19:13:23.017343       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-x8dh5"
	I0924 19:13:23.021595       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-xtcdj"
	
	
	==> kube-proxy [1bdc5344b17d] <==
	I0924 19:13:23.248646       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0924 19:13:23.248670       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0924 19:13:23.248678       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0924 19:13:23.257117       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0924 19:13:23.257126       1 server_others.go:206] "Using iptables Proxier"
	I0924 19:13:23.257137       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0924 19:13:23.257215       1 server.go:661] "Version info" version="v1.24.1"
	I0924 19:13:23.257222       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 19:13:23.257665       1 config.go:317] "Starting service config controller"
	I0924 19:13:23.257673       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0924 19:13:23.257674       1 config.go:226] "Starting endpoint slice config controller"
	I0924 19:13:23.257678       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0924 19:13:23.257865       1 config.go:444] "Starting node config controller"
	I0924 19:13:23.257869       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0924 19:13:23.357883       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0924 19:13:23.357917       1 shared_informer.go:262] Caches are synced for node config
	I0924 19:13:23.357921       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [5c8bbb3e6700] <==
	W0924 19:13:07.083861       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 19:13:07.083880       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0924 19:13:07.083907       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 19:13:07.083927       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0924 19:13:07.083977       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 19:13:07.083996       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0924 19:13:07.083244       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 19:13:07.084312       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0924 19:13:07.084153       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0924 19:13:07.084343       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0924 19:13:07.084165       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 19:13:07.084384       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0924 19:13:07.084178       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0924 19:13:07.084417       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0924 19:13:07.084191       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 19:13:07.084449       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0924 19:13:07.084215       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 19:13:07.084492       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0924 19:13:07.921166       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 19:13:07.921287       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0924 19:13:07.999992       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 19:13:08.000097       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0924 19:13:08.104964       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 19:13:08.105055       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0924 19:13:08.572445       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-09-24 19:08:21 UTC, ends at Tue 2024-09-24 19:17:27 UTC. --
	Sep 24 19:13:11 running-upgrade-070000 kubelet[12495]: E0924 19:13:11.989639   12495 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-070000\" already exists" pod="kube-system/etcd-running-upgrade-070000"
	Sep 24 19:13:21 running-upgrade-070000 kubelet[12495]: I0924 19:13:21.941188   12495 topology_manager.go:200] "Topology Admit Handler"
	Sep 24 19:13:21 running-upgrade-070000 kubelet[12495]: I0924 19:13:21.996941   12495 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 24 19:13:21 running-upgrade-070000 kubelet[12495]: I0924 19:13:21.997037   12495 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/af387153-e808-4876-8c8f-7c58727e9e9a-tmp\") pod \"storage-provisioner\" (UID: \"af387153-e808-4876-8c8f-7c58727e9e9a\") " pod="kube-system/storage-provisioner"
	Sep 24 19:13:21 running-upgrade-070000 kubelet[12495]: I0924 19:13:21.997051   12495 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lbrs\" (UniqueName: \"kubernetes.io/projected/af387153-e808-4876-8c8f-7c58727e9e9a-kube-api-access-9lbrs\") pod \"storage-provisioner\" (UID: \"af387153-e808-4876-8c8f-7c58727e9e9a\") " pod="kube-system/storage-provisioner"
	Sep 24 19:13:21 running-upgrade-070000 kubelet[12495]: I0924 19:13:21.997456   12495 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 24 19:13:22 running-upgrade-070000 kubelet[12495]: E0924 19:13:22.101228   12495 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 24 19:13:22 running-upgrade-070000 kubelet[12495]: E0924 19:13:22.101244   12495 projected.go:192] Error preparing data for projected volume kube-api-access-9lbrs for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 24 19:13:22 running-upgrade-070000 kubelet[12495]: E0924 19:13:22.101283   12495 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/af387153-e808-4876-8c8f-7c58727e9e9a-kube-api-access-9lbrs podName:af387153-e808-4876-8c8f-7c58727e9e9a nodeName:}" failed. No retries permitted until 2024-09-24 19:13:22.60127068 +0000 UTC m=+12.853678369 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9lbrs" (UniqueName: "kubernetes.io/projected/af387153-e808-4876-8c8f-7c58727e9e9a-kube-api-access-9lbrs") pod "storage-provisioner" (UID: "af387153-e808-4876-8c8f-7c58727e9e9a") : configmap "kube-root-ca.crt" not found
	Sep 24 19:13:22 running-upgrade-070000 kubelet[12495]: E0924 19:13:22.606052   12495 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 24 19:13:22 running-upgrade-070000 kubelet[12495]: E0924 19:13:22.606070   12495 projected.go:192] Error preparing data for projected volume kube-api-access-9lbrs for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 24 19:13:22 running-upgrade-070000 kubelet[12495]: E0924 19:13:22.606103   12495 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/af387153-e808-4876-8c8f-7c58727e9e9a-kube-api-access-9lbrs podName:af387153-e808-4876-8c8f-7c58727e9e9a nodeName:}" failed. No retries permitted until 2024-09-24 19:13:23.606093835 +0000 UTC m=+13.858501523 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9lbrs" (UniqueName: "kubernetes.io/projected/af387153-e808-4876-8c8f-7c58727e9e9a-kube-api-access-9lbrs") pod "storage-provisioner" (UID: "af387153-e808-4876-8c8f-7c58727e9e9a") : configmap "kube-root-ca.crt" not found
	Sep 24 19:13:22 running-upgrade-070000 kubelet[12495]: I0924 19:13:22.771518   12495 topology_manager.go:200] "Topology Admit Handler"
	Sep 24 19:13:22 running-upgrade-070000 kubelet[12495]: I0924 19:13:22.908070   12495 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/617b7f24-f206-4a99-baf3-8c2c5547ef3c-lib-modules\") pod \"kube-proxy-6ffnj\" (UID: \"617b7f24-f206-4a99-baf3-8c2c5547ef3c\") " pod="kube-system/kube-proxy-6ffnj"
	Sep 24 19:13:22 running-upgrade-070000 kubelet[12495]: I0924 19:13:22.908097   12495 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6shr6\" (UniqueName: \"kubernetes.io/projected/617b7f24-f206-4a99-baf3-8c2c5547ef3c-kube-api-access-6shr6\") pod \"kube-proxy-6ffnj\" (UID: \"617b7f24-f206-4a99-baf3-8c2c5547ef3c\") " pod="kube-system/kube-proxy-6ffnj"
	Sep 24 19:13:22 running-upgrade-070000 kubelet[12495]: I0924 19:13:22.908109   12495 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/617b7f24-f206-4a99-baf3-8c2c5547ef3c-kube-proxy\") pod \"kube-proxy-6ffnj\" (UID: \"617b7f24-f206-4a99-baf3-8c2c5547ef3c\") " pod="kube-system/kube-proxy-6ffnj"
	Sep 24 19:13:22 running-upgrade-070000 kubelet[12495]: I0924 19:13:22.908129   12495 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/617b7f24-f206-4a99-baf3-8c2c5547ef3c-xtables-lock\") pod \"kube-proxy-6ffnj\" (UID: \"617b7f24-f206-4a99-baf3-8c2c5547ef3c\") " pod="kube-system/kube-proxy-6ffnj"
	Sep 24 19:13:23 running-upgrade-070000 kubelet[12495]: I0924 19:13:23.020925   12495 topology_manager.go:200] "Topology Admit Handler"
	Sep 24 19:13:23 running-upgrade-070000 kubelet[12495]: I0924 19:13:23.025826   12495 topology_manager.go:200] "Topology Admit Handler"
	Sep 24 19:13:23 running-upgrade-070000 kubelet[12495]: I0924 19:13:23.209168   12495 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2831a261-4253-4acc-af03-5f9c8fcfd596-config-volume\") pod \"coredns-6d4b75cb6d-x8dh5\" (UID: \"2831a261-4253-4acc-af03-5f9c8fcfd596\") " pod="kube-system/coredns-6d4b75cb6d-x8dh5"
	Sep 24 19:13:23 running-upgrade-070000 kubelet[12495]: I0924 19:13:23.209198   12495 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt6f9\" (UniqueName: \"kubernetes.io/projected/2831a261-4253-4acc-af03-5f9c8fcfd596-kube-api-access-kt6f9\") pod \"coredns-6d4b75cb6d-x8dh5\" (UID: \"2831a261-4253-4acc-af03-5f9c8fcfd596\") " pod="kube-system/coredns-6d4b75cb6d-x8dh5"
	Sep 24 19:13:23 running-upgrade-070000 kubelet[12495]: I0924 19:13:23.209219   12495 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddb0eb09-d7b1-4830-8f48-1196ea03f9e8-config-volume\") pod \"coredns-6d4b75cb6d-xtcdj\" (UID: \"ddb0eb09-d7b1-4830-8f48-1196ea03f9e8\") " pod="kube-system/coredns-6d4b75cb6d-xtcdj"
	Sep 24 19:13:23 running-upgrade-070000 kubelet[12495]: I0924 19:13:23.209229   12495 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv5vh\" (UniqueName: \"kubernetes.io/projected/ddb0eb09-d7b1-4830-8f48-1196ea03f9e8-kube-api-access-tv5vh\") pod \"coredns-6d4b75cb6d-xtcdj\" (UID: \"ddb0eb09-d7b1-4830-8f48-1196ea03f9e8\") " pod="kube-system/coredns-6d4b75cb6d-xtcdj"
	Sep 24 19:17:02 running-upgrade-070000 kubelet[12495]: I0924 19:17:02.085021   12495 scope.go:110] "RemoveContainer" containerID="77dfe0886a80e91b5303cea084a47653dd4e07788dd7fadbbcd4d6f7ca34c1c1"
	Sep 24 19:17:02 running-upgrade-070000 kubelet[12495]: I0924 19:17:02.105861   12495 scope.go:110] "RemoveContainer" containerID="d70eedf42cf6204221f3bd74c18a10d02200883020074490b5bce0211467d7ce"
	
	
	==> storage-provisioner [1893b5bb7145] <==
	I0924 19:13:23.966077       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 19:13:23.974437       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 19:13:23.974456       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 19:13:23.978430       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 19:13:23.978660       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-070000_71e01d0b-b9fe-4e9f-8956-7fff181bd6c0!
	I0924 19:13:23.979046       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"11b8270c-5d3c-4cf3-a5ab-c74431664aa8", APIVersion:"v1", ResourceVersion:"384", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-070000_71e01d0b-b9fe-4e9f-8956-7fff181bd6c0 became leader
	I0924 19:13:24.079741       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-070000_71e01d0b-b9fe-4e9f-8956-7fff181bd6c0!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-070000 -n running-upgrade-070000
E0924 12:17:31.139590    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 12:17:32.025166    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-070000 -n running-upgrade-070000: exit status 2 (15.669336417s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-070000" apiserver is not running, skipping kubectl commands (state="Stopped")
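Note that the control-plane logs above were captured around 19:13 and show a healthy apiserver, while this status probe at 19:17 reports it Stopped, so the apiserver went down somewhere in the intervening four minutes. As a diagnostic aid only, here is a minimal Go sketch of an equivalent liveness probe; the guest address 10.0.2.15 comes from the logs above, while port 8443 (the APIServerPort in the cluster config) and anonymous access to /healthz are assumptions about defaults, not something this suite runs:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// 10.0.2.15 is the guest IP from the logs above; port 8443 is an
		// assumption based on the APIServerPort in the cluster config.
		client := &http.Client{
			Timeout: 3 * time.Second,
			Transport: &http.Transport{
				// the test cluster uses self-signed certificates, so skip
				// verification for this diagnostic-only probe
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}

Run periodically against the guest, a probe like this would narrow down when the apiserver stopped answering; the exit-status-2 result below only shows that it was already down by the time the harness checked.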
helpers_test.go:175: Cleaning up "running-upgrade-070000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-070000
--- FAIL: TestRunningBinaryUpgrade (598.36s)

TestKubernetesUpgrade (17.3s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-799000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-799000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.888333458s)

-- stdout --
	* [kubernetes-upgrade-799000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-799000" primary control-plane node in "kubernetes-upgrade-799000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-799000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:10:44.936244    4453 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:10:44.936379    4453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:10:44.936383    4453 out.go:358] Setting ErrFile to fd 2...
	I0924 12:10:44.936385    4453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:10:44.936515    4453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:10:44.937613    4453 out.go:352] Setting JSON to false
	I0924 12:10:44.954057    4453 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4215,"bootTime":1727200829,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:10:44.954125    4453 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:10:44.961004    4453 out.go:177] * [kubernetes-upgrade-799000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:10:44.968738    4453 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:10:44.968831    4453 notify.go:220] Checking for updates...
	I0924 12:10:44.974793    4453 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:10:44.977757    4453 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:10:44.980760    4453 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:10:44.983844    4453 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:10:44.986666    4453 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:10:44.990159    4453 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:10:44.990224    4453 config.go:182] Loaded profile config "running-upgrade-070000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:10:44.990279    4453 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:10:44.994707    4453 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:10:45.001764    4453 start.go:297] selected driver: qemu2
	I0924 12:10:45.001770    4453 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:10:45.001775    4453 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:10:45.004093    4453 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:10:45.006808    4453 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:10:45.008162    4453 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 12:10:45.008189    4453 cni.go:84] Creating CNI manager for ""
	I0924 12:10:45.008218    4453 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0924 12:10:45.008246    4453 start.go:340] cluster config:
	{Name:kubernetes-upgrade-799000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-799000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:10:45.012067    4453 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:10:45.018784    4453 out.go:177] * Starting "kubernetes-upgrade-799000" primary control-plane node in "kubernetes-upgrade-799000" cluster
	I0924 12:10:45.022787    4453 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0924 12:10:45.022803    4453 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0924 12:10:45.022814    4453 cache.go:56] Caching tarball of preloaded images
	I0924 12:10:45.022900    4453 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:10:45.022910    4453 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0924 12:10:45.022958    4453 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/kubernetes-upgrade-799000/config.json ...
	I0924 12:10:45.022973    4453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/kubernetes-upgrade-799000/config.json: {Name:mk36595c7c76504e224b856432e2f3af698752b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:10:45.023289    4453 start.go:360] acquireMachinesLock for kubernetes-upgrade-799000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:10:45.023329    4453 start.go:364] duration metric: took 31.5µs to acquireMachinesLock for "kubernetes-upgrade-799000"
	I0924 12:10:45.023341    4453 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-799000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-799000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:10:45.023374    4453 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:10:45.031643    4453 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:10:45.047067    4453 start.go:159] libmachine.API.Create for "kubernetes-upgrade-799000" (driver="qemu2")
	I0924 12:10:45.047100    4453 client.go:168] LocalClient.Create starting
	I0924 12:10:45.047167    4453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:10:45.047204    4453 main.go:141] libmachine: Decoding PEM data...
	I0924 12:10:45.047212    4453 main.go:141] libmachine: Parsing certificate...
	I0924 12:10:45.047252    4453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:10:45.047276    4453 main.go:141] libmachine: Decoding PEM data...
	I0924 12:10:45.047285    4453 main.go:141] libmachine: Parsing certificate...
	I0924 12:10:45.047631    4453 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:10:45.212412    4453 main.go:141] libmachine: Creating SSH key...
	I0924 12:10:45.255795    4453 main.go:141] libmachine: Creating Disk image...
	I0924 12:10:45.255800    4453 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:10:45.255991    4453 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/disk.qcow2
	I0924 12:10:45.265260    4453 main.go:141] libmachine: STDOUT: 
	I0924 12:10:45.265275    4453 main.go:141] libmachine: STDERR: 
	I0924 12:10:45.265325    4453 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/disk.qcow2 +20000M
	I0924 12:10:45.273448    4453 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:10:45.273465    4453 main.go:141] libmachine: STDERR: 
	I0924 12:10:45.273479    4453 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/disk.qcow2
	I0924 12:10:45.273482    4453 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:10:45.273495    4453 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:10:45.273526    4453 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:02:e2:f2:84:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/disk.qcow2
	I0924 12:10:45.275154    4453 main.go:141] libmachine: STDOUT: 
	I0924 12:10:45.275169    4453 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:10:45.275191    4453 client.go:171] duration metric: took 228.083583ms to LocalClient.Create
	I0924 12:10:47.277365    4453 start.go:128] duration metric: took 2.2539705s to createHost
	I0924 12:10:47.277436    4453 start.go:83] releasing machines lock for "kubernetes-upgrade-799000", held for 2.254110709s
	W0924 12:10:47.277553    4453 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:10:47.288527    4453 out.go:177] * Deleting "kubernetes-upgrade-799000" in qemu2 ...
	W0924 12:10:47.314710    4453 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:10:47.314739    4453 start.go:729] Will try again in 5 seconds ...
	I0924 12:10:52.316970    4453 start.go:360] acquireMachinesLock for kubernetes-upgrade-799000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:10:52.317401    4453 start.go:364] duration metric: took 318.834µs to acquireMachinesLock for "kubernetes-upgrade-799000"
	I0924 12:10:52.317446    4453 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-799000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-799000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:10:52.317622    4453 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:10:52.326720    4453 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:10:52.370879    4453 start.go:159] libmachine.API.Create for "kubernetes-upgrade-799000" (driver="qemu2")
	I0924 12:10:52.370924    4453 client.go:168] LocalClient.Create starting
	I0924 12:10:52.371054    4453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:10:52.371127    4453 main.go:141] libmachine: Decoding PEM data...
	I0924 12:10:52.371143    4453 main.go:141] libmachine: Parsing certificate...
	I0924 12:10:52.371194    4453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:10:52.371243    4453 main.go:141] libmachine: Decoding PEM data...
	I0924 12:10:52.371254    4453 main.go:141] libmachine: Parsing certificate...
	I0924 12:10:52.371874    4453 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:10:52.548830    4453 main.go:141] libmachine: Creating SSH key...
	I0924 12:10:52.729425    4453 main.go:141] libmachine: Creating Disk image...
	I0924 12:10:52.729433    4453 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:10:52.729644    4453 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/disk.qcow2
	I0924 12:10:52.739287    4453 main.go:141] libmachine: STDOUT: 
	I0924 12:10:52.739305    4453 main.go:141] libmachine: STDERR: 
	I0924 12:10:52.739372    4453 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/disk.qcow2 +20000M
	I0924 12:10:52.747387    4453 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:10:52.747403    4453 main.go:141] libmachine: STDERR: 
	I0924 12:10:52.747416    4453 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/disk.qcow2
	I0924 12:10:52.747425    4453 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:10:52.747440    4453 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:10:52.747466    4453 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:bc:a1:a7:17:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/disk.qcow2
	I0924 12:10:52.749118    4453 main.go:141] libmachine: STDOUT: 
	I0924 12:10:52.749132    4453 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:10:52.749145    4453 client.go:171] duration metric: took 378.218333ms to LocalClient.Create
	I0924 12:10:54.751309    4453 start.go:128] duration metric: took 2.433598s to createHost
	I0924 12:10:54.751347    4453 start.go:83] releasing machines lock for "kubernetes-upgrade-799000", held for 2.433945791s
	W0924 12:10:54.751538    4453 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-799000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-799000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:10:54.770030    4453 out.go:201] 
	W0924 12:10:54.772982    4453 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:10:54.772996    4453 out.go:270] * 
	* 
	W0924 12:10:54.774341    4453 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:10:54.785894    4453 out.go:201] 

** /stderr **
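Both VM create attempts above, including the retry after five seconds, fail at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal Go sketch, assuming only the socket path shown in the log, of a probe that distinguishes a missing socket file from a daemon that is not listening:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing log above
		if _, err := os.Stat(sock); err != nil {
			fmt.Fprintln(os.Stderr, "socket file missing:", err)
			os.Exit(1)
		}
		// "connection refused" with the file present means the daemon behind
		// the socket is not listening
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, "socket_vmnet not accepting connections:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A host-side daemon that is down would also explain why the v1.31.1 restart attempts below fail with the identical error despite a completely different Kubernetes version, pointing at the test host rather than the cluster configuration.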
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-799000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-799000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-799000: (2.004835333s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-799000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-799000 status --format={{.Host}}: exit status 7 (48.758917ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-799000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-799000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.181571834s)

-- stdout --
	* [kubernetes-upgrade-799000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-799000" primary control-plane node in "kubernetes-upgrade-799000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-799000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-799000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:10:56.881999    4482 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:10:56.882165    4482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:10:56.882170    4482 out.go:358] Setting ErrFile to fd 2...
	I0924 12:10:56.882172    4482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:10:56.882307    4482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:10:56.883398    4482 out.go:352] Setting JSON to false
	I0924 12:10:56.900737    4482 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4227,"bootTime":1727200829,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:10:56.900808    4482 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:10:56.905899    4482 out.go:177] * [kubernetes-upgrade-799000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:10:56.913019    4482 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:10:56.913096    4482 notify.go:220] Checking for updates...
	I0924 12:10:56.919905    4482 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:10:56.923971    4482 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:10:56.927036    4482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:10:56.929977    4482 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:10:56.933012    4482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:10:56.936203    4482 config.go:182] Loaded profile config "kubernetes-upgrade-799000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0924 12:10:56.936456    4482 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:10:56.940028    4482 out.go:177] * Using the qemu2 driver based on existing profile
	I0924 12:10:56.946900    4482 start.go:297] selected driver: qemu2
	I0924 12:10:56.946907    4482 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-799000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-799000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:10:56.946958    4482 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:10:56.949368    4482 cni.go:84] Creating CNI manager for ""
	I0924 12:10:56.949398    4482 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:10:56.949415    4482 start.go:340] cluster config:
	{Name:kubernetes-upgrade-799000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-799000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:10:56.952645    4482 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:10:56.957142    4482 out.go:177] * Starting "kubernetes-upgrade-799000" primary control-plane node in "kubernetes-upgrade-799000" cluster
	I0924 12:10:56.960955    4482 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:10:56.960969    4482 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:10:56.960975    4482 cache.go:56] Caching tarball of preloaded images
	I0924 12:10:56.961034    4482 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:10:56.961040    4482 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:10:56.961102    4482 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/kubernetes-upgrade-799000/config.json ...
	I0924 12:10:56.961564    4482 start.go:360] acquireMachinesLock for kubernetes-upgrade-799000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:10:56.961600    4482 start.go:364] duration metric: took 30.084µs to acquireMachinesLock for "kubernetes-upgrade-799000"
	I0924 12:10:56.961609    4482 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:10:56.961614    4482 fix.go:54] fixHost starting: 
	I0924 12:10:56.961716    4482 fix.go:112] recreateIfNeeded on kubernetes-upgrade-799000: state=Stopped err=<nil>
	W0924 12:10:56.961725    4482 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:10:56.969984    4482 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-799000" ...
	I0924 12:10:56.973962    4482 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:10:56.973995    4482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:bc:a1:a7:17:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/disk.qcow2
	I0924 12:10:56.975931    4482 main.go:141] libmachine: STDOUT: 
	I0924 12:10:56.975947    4482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:10:56.975975    4482 fix.go:56] duration metric: took 14.359209ms for fixHost
	I0924 12:10:56.975979    4482 start.go:83] releasing machines lock for "kubernetes-upgrade-799000", held for 14.374625ms
	W0924 12:10:56.975985    4482 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:10:56.976017    4482 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:10:56.976021    4482 start.go:729] Will try again in 5 seconds ...
	I0924 12:11:01.978177    4482 start.go:360] acquireMachinesLock for kubernetes-upgrade-799000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:11:01.978703    4482 start.go:364] duration metric: took 439.583µs to acquireMachinesLock for "kubernetes-upgrade-799000"
	I0924 12:11:01.978879    4482 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:11:01.978900    4482 fix.go:54] fixHost starting: 
	I0924 12:11:01.979687    4482 fix.go:112] recreateIfNeeded on kubernetes-upgrade-799000: state=Stopped err=<nil>
	W0924 12:11:01.979714    4482 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:11:01.982462    4482 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-799000" ...
	I0924 12:11:01.990212    4482 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:11:01.990415    4482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:bc:a1:a7:17:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubernetes-upgrade-799000/disk.qcow2
	I0924 12:11:02.000011    4482 main.go:141] libmachine: STDOUT: 
	I0924 12:11:02.000067    4482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:11:02.000138    4482 fix.go:56] duration metric: took 21.242625ms for fixHost
	I0924 12:11:02.000154    4482 start.go:83] releasing machines lock for "kubernetes-upgrade-799000", held for 21.429708ms
	W0924 12:11:02.000354    4482 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-799000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-799000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:11:02.006226    4482 out.go:201] 
	W0924 12:11:02.009335    4482 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:11:02.009359    4482 out.go:270] * 
	* 
	W0924 12:11:02.012018    4482 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:11:02.019157    4482 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-799000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-799000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-799000 version --output=json: exit status 1 (62.945209ms)

** stderr ** 
	error: context "kubernetes-upgrade-799000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-24 12:11:02.097003 -0700 PDT m=+3138.033782001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-799000 -n kubernetes-upgrade-799000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-799000 -n kubernetes-upgrade-799000: exit status 7 (33.345583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-799000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-799000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-799000
--- FAIL: TestKubernetesUpgrade (17.30s)
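The whole failure above reduces to one host-side symptom: every qemu2 start goes through /opt/socket_vmnet/bin/socket_vmnet_client, and each attempt dies with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning no daemon was accepting connections on that socket on this agent. The later kubectl error (context "kubernetes-upgrade-799000" does not exist) is just fallout: because "minikube start" exited with status 80, the profile never came up and the expected kubeconfig context was never written. The sketch below reproduces the same connection probe independently of minikube; it is a minimal diagnostic assuming socket_vmnet exposes an ordinary unix-domain socket at the path shown in the log, and it is not part of the test suite itself.

    // socketprobe.go - minimal sketch: check whether anything is listening
    // on the unix socket the qemu2 driver needs. The path is taken from the
    // SocketVMnetPath field in the failing config above.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	const path = "/var/run/socket_vmnet"
    	if _, err := os.Stat(path); err != nil {
    		fmt.Printf("socket file missing: %v\n", err) // daemon never created it
    		os.Exit(1)
    	}
    	conn, err := net.DialTimeout("unix", path, 2*time.Second)
    	if err != nil {
    		// "connection refused" here matches the driver failure: the socket
    		// file exists but no daemon is accepting connections behind it.
    		fmt.Printf("dial failed: %v\n", err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails, restarting the socket_vmnet daemon on the host (however it is managed there, for example via launchd) and then running the "minikube delete -p kubernetes-upgrade-799000" cleanup that the log itself suggests is the usual recovery path.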

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.38s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19700
- KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2660221343/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.38s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.09s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19700
- KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2997625534/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.09s)
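Both TestHyperkitDriverSkipUpgrade subtests above fail for the same structural reason rather than a regression: hyperkit is an Intel-only hypervisor, so on this darwin/arm64 agent minikube refuses the driver outright (DRV_UNSUPPORTED_OS, exit status 56) before any of the upgrade logic under test can run. On such hosts a skip would be more informative than a failure; the guard below is a minimal sketch of that idea, not minikube's actual test code (the helper name, package name, and message are invented for illustration).

    package driverguard // hypothetical package, for illustration only

    import (
    	"runtime"
    	"testing"
    )

    // skipIfNoHyperkit skips hyperkit-only tests on platforms where the
    // driver cannot exist, instead of letting them fail with exit status 56.
    func skipIfNoHyperkit(t *testing.T) {
    	t.Helper()
    	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
    		t.Skipf("hyperkit requires darwin/amd64; running on %s/%s",
    			runtime.GOOS, runtime.GOARCH)
    	}
    }

Calling skipIfNoHyperkit(t) at the top of each subtest would turn these two entries into skips on arm64 agents while leaving the Intel coverage untouched.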

TestStoppedBinaryUpgrade/Upgrade (574.4s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.710665850 start -p stopped-upgrade-164000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.710665850 start -p stopped-upgrade-164000 --memory=2200 --vm-driver=qemu2 : (39.799203334s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.710665850 -p stopped-upgrade-164000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.710665850 -p stopped-upgrade-164000 stop: (12.132467333s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-164000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0924 12:12:31.141638    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 12:12:32.025928    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
E0924 12:15:35.116498    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-164000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.379812958s)

-- stdout --
	* [stopped-upgrade-164000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-164000" primary control-plane node in "stopped-upgrade-164000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-164000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0924 12:11:55.715126    4520 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:11:55.715296    4520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:11:55.715300    4520 out.go:358] Setting ErrFile to fd 2...
	I0924 12:11:55.715304    4520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:11:55.715463    4520 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:11:55.716803    4520 out.go:352] Setting JSON to false
	I0924 12:11:55.736340    4520 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4286,"bootTime":1727200829,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:11:55.736416    4520 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:11:55.740032    4520 out.go:177] * [stopped-upgrade-164000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:11:55.748751    4520 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:11:55.748814    4520 notify.go:220] Checking for updates...
	I0924 12:11:55.754710    4520 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:11:55.757658    4520 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:11:55.760614    4520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:11:55.763681    4520 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:11:55.766674    4520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:11:55.769886    4520 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:11:55.773667    4520 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 12:11:55.776669    4520 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:11:55.779731    4520 out.go:177] * Using the qemu2 driver based on existing profile
	I0924 12:11:55.786634    4520 start.go:297] selected driver: qemu2
	I0924 12:11:55.786640    4520 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-164000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50530 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0924 12:11:55.786687    4520 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:11:55.788844    4520 cni.go:84] Creating CNI manager for ""
	I0924 12:11:55.788880    4520 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:11:55.788909    4520 start.go:340] cluster config:
	{Name:stopped-upgrade-164000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50530 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0924 12:11:55.788967    4520 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:11:55.796668    4520 out.go:177] * Starting "stopped-upgrade-164000" primary control-plane node in "stopped-upgrade-164000" cluster
	I0924 12:11:55.800693    4520 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0924 12:11:55.800708    4520 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0924 12:11:55.800716    4520 cache.go:56] Caching tarball of preloaded images
	I0924 12:11:55.800779    4520 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:11:55.800785    4520 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0924 12:11:55.800847    4520 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/config.json ...
	I0924 12:11:55.801159    4520 start.go:360] acquireMachinesLock for stopped-upgrade-164000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:11:55.801190    4520 start.go:364] duration metric: took 25.833µs to acquireMachinesLock for "stopped-upgrade-164000"
	I0924 12:11:55.801200    4520 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:11:55.801205    4520 fix.go:54] fixHost starting: 
	I0924 12:11:55.801308    4520 fix.go:112] recreateIfNeeded on stopped-upgrade-164000: state=Stopped err=<nil>
	W0924 12:11:55.801316    4520 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:11:55.809701    4520 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-164000" ...
	I0924 12:11:55.813605    4520 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:11:55.813678    4520 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50495-:22,hostfwd=tcp::50496-:2376,hostname=stopped-upgrade-164000 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/disk.qcow2
	I0924 12:11:55.858272    4520 main.go:141] libmachine: STDOUT: 
	I0924 12:11:55.858295    4520 main.go:141] libmachine: STDERR: 
	I0924 12:11:55.858301    4520 main.go:141] libmachine: Waiting for VM to start (ssh -p 50495 docker@127.0.0.1)...
	I0924 12:12:16.230822    4520 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/config.json ...
	I0924 12:12:16.231649    4520 machine.go:93] provisionDockerMachine start ...
	I0924 12:12:16.231988    4520 main.go:141] libmachine: Using SSH client type: native
	I0924 12:12:16.232511    4520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105095c00] 0x105098440 <nil>  [] 0s} localhost 50495 <nil> <nil>}
	I0924 12:12:16.232526    4520 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 12:12:16.330859    4520 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0924 12:12:16.330910    4520 buildroot.go:166] provisioning hostname "stopped-upgrade-164000"
	I0924 12:12:16.331078    4520 main.go:141] libmachine: Using SSH client type: native
	I0924 12:12:16.331349    4520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105095c00] 0x105098440 <nil>  [] 0s} localhost 50495 <nil> <nil>}
	I0924 12:12:16.331365    4520 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-164000 && echo "stopped-upgrade-164000" | sudo tee /etc/hostname
	I0924 12:12:16.421228    4520 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-164000
	
	I0924 12:12:16.421358    4520 main.go:141] libmachine: Using SSH client type: native
	I0924 12:12:16.421575    4520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105095c00] 0x105098440 <nil>  [] 0s} localhost 50495 <nil> <nil>}
	I0924 12:12:16.421589    4520 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-164000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-164000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-164000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 12:12:16.505024    4520 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 12:12:16.505040    4520 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19700-1081/.minikube CaCertPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19700-1081/.minikube}
	I0924 12:12:16.505056    4520 buildroot.go:174] setting up certificates
	I0924 12:12:16.505063    4520 provision.go:84] configureAuth start
	I0924 12:12:16.505072    4520 provision.go:143] copyHostCerts
	I0924 12:12:16.505188    4520 exec_runner.go:144] found /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.pem, removing ...
	I0924 12:12:16.505198    4520 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.pem
	I0924 12:12:16.505364    4520 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.pem (1078 bytes)
	I0924 12:12:16.505616    4520 exec_runner.go:144] found /Users/jenkins/minikube-integration/19700-1081/.minikube/cert.pem, removing ...
	I0924 12:12:16.505621    4520 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19700-1081/.minikube/cert.pem
	I0924 12:12:16.505700    4520 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19700-1081/.minikube/cert.pem (1123 bytes)
	I0924 12:12:16.505851    4520 exec_runner.go:144] found /Users/jenkins/minikube-integration/19700-1081/.minikube/key.pem, removing ...
	I0924 12:12:16.505858    4520 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19700-1081/.minikube/key.pem
	I0924 12:12:16.505929    4520 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19700-1081/.minikube/key.pem (1675 bytes)
	I0924 12:12:16.506079    4520 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-164000 san=[127.0.0.1 localhost minikube stopped-upgrade-164000]
	I0924 12:12:16.600564    4520 provision.go:177] copyRemoteCerts
	I0924 12:12:16.600607    4520 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 12:12:16.600615    4520 sshutil.go:53] new ssh client: &{IP:localhost Port:50495 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/id_rsa Username:docker}
	I0924 12:12:16.638430    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 12:12:16.644727    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0924 12:12:16.651018    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 12:12:16.658269    4520 provision.go:87] duration metric: took 153.209167ms to configureAuth
	I0924 12:12:16.658279    4520 buildroot.go:189] setting minikube options for container-runtime
	I0924 12:12:16.658379    4520 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:12:16.658430    4520 main.go:141] libmachine: Using SSH client type: native
	I0924 12:12:16.658527    4520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105095c00] 0x105098440 <nil>  [] 0s} localhost 50495 <nil> <nil>}
	I0924 12:12:16.658532    4520 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0924 12:12:16.727716    4520 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0924 12:12:16.727725    4520 buildroot.go:70] root file system type: tmpfs
	I0924 12:12:16.727775    4520 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0924 12:12:16.727844    4520 main.go:141] libmachine: Using SSH client type: native
	I0924 12:12:16.727958    4520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105095c00] 0x105098440 <nil>  [] 0s} localhost 50495 <nil> <nil>}
	I0924 12:12:16.727991    4520 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0924 12:12:16.804669    4520 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0924 12:12:16.804734    4520 main.go:141] libmachine: Using SSH client type: native
	I0924 12:12:16.804853    4520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105095c00] 0x105098440 <nil>  [] 0s} localhost 50495 <nil> <nil>}
	I0924 12:12:16.804863    4520 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0924 12:12:17.182191    4520 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0924 12:12:17.182208    4520 machine.go:96] duration metric: took 950.628875ms to provisionDockerMachine
	I0924 12:12:17.182215    4520 start.go:293] postStartSetup for "stopped-upgrade-164000" (driver="qemu2")
	I0924 12:12:17.182222    4520 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 12:12:17.182285    4520 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 12:12:17.182297    4520 sshutil.go:53] new ssh client: &{IP:localhost Port:50495 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/id_rsa Username:docker}
	I0924 12:12:17.219711    4520 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 12:12:17.221138    4520 info.go:137] Remote host: Buildroot 2021.02.12
	I0924 12:12:17.221147    4520 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19700-1081/.minikube/addons for local assets ...
	I0924 12:12:17.221232    4520 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19700-1081/.minikube/files for local assets ...
	I0924 12:12:17.221361    4520 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19700-1081/.minikube/files/etc/ssl/certs/15982.pem -> 15982.pem in /etc/ssl/certs
	I0924 12:12:17.221490    4520 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 12:12:17.223972    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/files/etc/ssl/certs/15982.pem --> /etc/ssl/certs/15982.pem (1708 bytes)
	I0924 12:12:17.230995    4520 start.go:296] duration metric: took 48.778458ms for postStartSetup
	I0924 12:12:17.231009    4520 fix.go:56] duration metric: took 21.43352425s for fixHost
	I0924 12:12:17.231046    4520 main.go:141] libmachine: Using SSH client type: native
	I0924 12:12:17.231153    4520 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105095c00] 0x105098440 <nil>  [] 0s} localhost 50495 <nil> <nil>}
	I0924 12:12:17.231159    4520 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0924 12:12:17.301300    4520 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727205137.661267504
	
	I0924 12:12:17.301309    4520 fix.go:216] guest clock: 1727205137.661267504
	I0924 12:12:17.301313    4520 fix.go:229] Guest: 2024-09-24 12:12:17.661267504 -0700 PDT Remote: 2024-09-24 12:12:17.231012 -0700 PDT m=+21.551961459 (delta=430.255504ms)
	I0924 12:12:17.301330    4520 fix.go:200] guest clock delta is within tolerance: 430.255504ms
	I0924 12:12:17.301337    4520 start.go:83] releasing machines lock for "stopped-upgrade-164000", held for 21.50386675s
	I0924 12:12:17.301404    4520 ssh_runner.go:195] Run: cat /version.json
	I0924 12:12:17.301413    4520 sshutil.go:53] new ssh client: &{IP:localhost Port:50495 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/id_rsa Username:docker}
	I0924 12:12:17.301404    4520 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 12:12:17.301443    4520 sshutil.go:53] new ssh client: &{IP:localhost Port:50495 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/id_rsa Username:docker}
	W0924 12:12:17.302036    4520 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50495: connect: connection refused
	I0924 12:12:17.302053    4520 retry.go:31] will retry after 264.324012ms: dial tcp [::1]:50495: connect: connection refused
	W0924 12:12:17.337243    4520 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0924 12:12:17.337299    4520 ssh_runner.go:195] Run: systemctl --version
	I0924 12:12:17.339127    4520 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0924 12:12:17.340801    4520 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 12:12:17.340827    4520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0924 12:12:17.343499    4520 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0924 12:12:17.348259    4520 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0924 12:12:17.348268    4520 start.go:495] detecting cgroup driver to use...
	I0924 12:12:17.348353    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 12:12:17.355789    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0924 12:12:17.359138    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0924 12:12:17.362539    4520 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0924 12:12:17.362568    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0924 12:12:17.365604    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 12:12:17.368241    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0924 12:12:17.371205    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 12:12:17.374467    4520 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 12:12:17.377505    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0924 12:12:17.380190    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0924 12:12:17.383329    4520 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0924 12:12:17.386597    4520 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 12:12:17.389317    4520 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 12:12:17.391786    4520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:12:17.472486    4520 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0924 12:12:17.482488    4520 start.go:495] detecting cgroup driver to use...
	I0924 12:12:17.482557    4520 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0924 12:12:17.488042    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 12:12:17.492767    4520 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0924 12:12:17.503495    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0924 12:12:17.508605    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0924 12:12:17.513182    4520 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0924 12:12:17.562785    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0924 12:12:17.568376    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 12:12:17.574869    4520 ssh_runner.go:195] Run: which cri-dockerd
	I0924 12:12:17.576377    4520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0924 12:12:17.579044    4520 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0924 12:12:17.583821    4520 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0924 12:12:17.666465    4520 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0924 12:12:17.740787    4520 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0924 12:12:17.740852    4520 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0924 12:12:17.745934    4520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:12:17.823743    4520 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0924 12:12:18.971674    4520 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.148005792s)
	I0924 12:12:18.971759    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0924 12:12:18.976550    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0924 12:12:18.981154    4520 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0924 12:12:19.063354    4520 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0924 12:12:19.146624    4520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:12:19.227935    4520 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0924 12:12:19.233852    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0924 12:12:19.238010    4520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:12:19.314534    4520 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0924 12:12:19.354508    4520 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0924 12:12:19.354604    4520 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0924 12:12:19.356771    4520 start.go:563] Will wait 60s for crictl version
	I0924 12:12:19.356824    4520 ssh_runner.go:195] Run: which crictl
	I0924 12:12:19.358712    4520 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 12:12:19.373507    4520 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0924 12:12:19.373588    4520 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0924 12:12:19.389662    4520 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0924 12:12:19.406396    4520 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0924 12:12:19.406477    4520 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0924 12:12:19.407867    4520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 12:12:19.411581    4520 kubeadm.go:883] updating cluster {Name:stopped-upgrade-164000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50530 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0924 12:12:19.411632    4520 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0924 12:12:19.411686    4520 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0924 12:12:19.422035    4520 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0924 12:12:19.422047    4520 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0924 12:12:19.422099    4520 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0924 12:12:19.425421    4520 ssh_runner.go:195] Run: which lz4
	I0924 12:12:19.426723    4520 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0924 12:12:19.428016    4520 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0924 12:12:19.428027    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0924 12:12:20.387123    4520 docker.go:649] duration metric: took 960.504084ms to copy over tarball
	I0924 12:12:20.387201    4520 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0924 12:12:21.543486    4520 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.156345792s)
	I0924 12:12:21.543499    4520 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0924 12:12:21.559324    4520 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0924 12:12:21.563291    4520 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0924 12:12:21.568788    4520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:12:21.652658    4520 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0924 12:12:23.277241    4520 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.624668084s)
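[Annotation] The ~360 MB preload tarball scp'd above is unpacked straight into /var with lz4-compressed tar, deleted, and Docker is restarted so the daemon picks up the injected image store. A sketch of that sequence, assuming sudo, tar, and an lz4 binary are available on the guest:

```go
package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image preload into /var and
// restarts Docker so the daemon re-reads the layer store, following the
// tar / rm / systemctl sequence in the log.
func extractPreload(tarball string) error {
	steps := [][]string{
		{"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball},
		{"sudo", "rm", tarball},
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "restart", "docker"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", s, err, out)
		}
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
```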
	I0924 12:12:23.277361    4520 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0924 12:12:23.291024    4520 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0924 12:12:23.291033    4520 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
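[Annotation] Note the registry mismatch driving this branch: the preload ships k8s.gcr.io/* tags, while this build checks for registry.k8s.io/* names, so the exact-match test fails and every image is reloaded from the host cache instead. A toy illustration of that comparison (hypothetical helper; the real check lives in docker.go):

```go
package main

import "fmt"

// missing returns the required image refs that are absent from the
// `docker images` listing. The comparison is an exact string match, so a
// k8s.gcr.io tag never satisfies a registry.k8s.io requirement.
func missing(required, have []string) []string {
	got := map[string]bool{}
	for _, img := range have {
		got[img] = true
	}
	var out []string
	for _, img := range required {
		if !got[img] {
			out = append(out, img)
		}
	}
	return out
}

func main() {
	have := []string{"k8s.gcr.io/kube-apiserver:v1.24.1", "k8s.gcr.io/etcd:3.5.3-0"}
	want := []string{"registry.k8s.io/kube-apiserver:v1.24.1", "registry.k8s.io/etcd:3.5.3-0"}
	fmt.Println(missing(want, have)) // both entries: only the registry prefix differs
}
```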
	I0924 12:12:23.291038    4520 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0924 12:12:23.295566    4520 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:12:23.297433    4520 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0924 12:12:23.299298    4520 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:12:23.299343    4520 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0924 12:12:23.301911    4520 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0924 12:12:23.301922    4520 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0924 12:12:23.303378    4520 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:12:23.303412    4520 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0924 12:12:23.304283    4520 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0924 12:12:23.305046    4520 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0924 12:12:23.305830    4520 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0924 12:12:23.307145    4520 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0924 12:12:23.307219    4520 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:12:23.307270    4520 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0924 12:12:23.308587    4520 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0924 12:12:23.309372    4520 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0924 12:12:23.717339    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0924 12:12:23.731407    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0924 12:12:23.732988    4520 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0924 12:12:23.733008    4520 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0924 12:12:23.733048    4520 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0924 12:12:23.745380    4520 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0924 12:12:23.745400    4520 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0924 12:12:23.745476    4520 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0924 12:12:23.749998    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0924 12:12:23.750115    4520 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	W0924 12:12:23.757718    4520 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0924 12:12:23.758010    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:12:23.760566    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0924 12:12:23.760620    4520 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0924 12:12:23.760633    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0924 12:12:23.760691    4520 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0924 12:12:23.766428    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0924 12:12:23.771753    4520 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0924 12:12:23.771774    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0924 12:12:23.774999    4520 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0924 12:12:23.775026    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0924 12:12:23.775095    4520 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0924 12:12:23.775110    4520 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:12:23.775152    4520 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0924 12:12:23.788360    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0924 12:12:23.788651    4520 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0924 12:12:23.788670    4520 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0924 12:12:23.789166    4520 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0924 12:12:23.789920    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0924 12:12:23.807115    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0924 12:12:23.856101    4520 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0924 12:12:23.856178    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0924 12:12:23.856293    4520 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0924 12:12:23.856304    4520 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0924 12:12:23.856312    4520 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0924 12:12:23.856362    4520 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0924 12:12:23.856362    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0924 12:12:23.856391    4520 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0924 12:12:23.856400    4520 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0924 12:12:23.856404    4520 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0924 12:12:23.856412    4520 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0924 12:12:23.856439    4520 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0924 12:12:23.856446    4520 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0924 12:12:23.885025    4520 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0924 12:12:23.885063    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0924 12:12:23.885258    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0924 12:12:23.910869    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0924 12:12:23.910940    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0924 12:12:23.979884    4520 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0924 12:12:23.979897    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0924 12:12:24.099890    4520 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0924 12:12:24.133084    4520 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0924 12:12:24.133215    4520 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:12:24.156582    4520 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0924 12:12:24.156598    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0924 12:12:24.158778    4520 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0924 12:12:24.158800    4520 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:12:24.158877    4520 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:12:24.302688    4520 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0924 12:12:24.302711    4520 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0924 12:12:24.302832    4520 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0924 12:12:24.304294    4520 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0924 12:12:24.304309    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0924 12:12:24.334231    4520 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0924 12:12:24.334246    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0924 12:12:24.563903    4520 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0924 12:12:24.563941    4520 cache_images.go:92] duration metric: took 1.272967708s to LoadCachedImages
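[Annotation] Every cache load above follows the same three-step pipeline: stat the image file in the VM, scp it over when missing, then stream it into the daemon with `sudo cat <file> | docker load`. A local sketch of that final step (file path illustrative; a Docker CLI on PATH is assumed):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage streams an image tarball into the local Docker daemon, the
// same effect as the `sudo cat <file> | docker load` runs in the log.
func loadImage(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f // pipe the tarball directly, no shell needed
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/etcd_3.5.3-0"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```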
	W0924 12:12:24.563990    4520 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0924 12:12:24.563995    4520 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0924 12:12:24.564058    4520 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-164000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 12:12:24.564142    4520 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0924 12:12:24.580635    4520 cni.go:84] Creating CNI manager for ""
	I0924 12:12:24.580648    4520 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:12:24.580655    4520 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 12:12:24.580664    4520 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-164000 NodeName:stopped-upgrade-164000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 12:12:24.580777    4520 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-164000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 12:12:24.580842    4520 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0924 12:12:24.583514    4520 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 12:12:24.583547    4520 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 12:12:24.586524    4520 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0924 12:12:24.591396    4520 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 12:12:24.596401    4520 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
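[Annotation] The kubeadm.yaml written above stacks four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick sanity check is to unmarshal the kubelet document and read back the field the later drift diff turns on, cgroupDriver. A sketch assuming the gopkg.in/yaml.v3 module is available:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3" // assumed dependency: go get gopkg.in/yaml.v3
)

// kubeletCfg is the KubeletConfiguration document from the generated config.
const kubeletCfg = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
`

func main() {
	var doc struct {
		Kind         string `yaml:"kind"`
		CgroupDriver string `yaml:"cgroupDriver"`
	}
	if err := yaml.Unmarshal([]byte(kubeletCfg), &doc); err != nil {
		panic(err)
	}
	// The drift diff later in the log hinges on exactly this field:
	// the old config said systemd, the new one says cgroupfs.
	fmt.Printf("%s: cgroupDriver=%s\n", doc.Kind, doc.CgroupDriver)
}
```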
	I0924 12:12:24.601181    4520 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0924 12:12:24.602316    4520 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 12:12:24.606345    4520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:12:24.687486    4520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 12:12:24.697182    4520 certs.go:68] Setting up /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000 for IP: 10.0.2.15
	I0924 12:12:24.697203    4520 certs.go:194] generating shared ca certs ...
	I0924 12:12:24.697212    4520 certs.go:226] acquiring lock for ca certs: {Name:mk724855f1a91a4bb17b52053043bbe8bd1cc119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:12:24.697401    4520 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.key
	I0924 12:12:24.697455    4520 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.key
	I0924 12:12:24.697466    4520 certs.go:256] generating profile certs ...
	I0924 12:12:24.697546    4520 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/client.key
	I0924 12:12:24.697564    4520 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.key.c66f4644
	I0924 12:12:24.697573    4520 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.crt.c66f4644 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0924 12:12:24.796229    4520 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.crt.c66f4644 ...
	I0924 12:12:24.796242    4520 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.crt.c66f4644: {Name:mk5e28e38bebb807ecccc0831fd829c1d304600a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:12:24.796837    4520 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.key.c66f4644 ...
	I0924 12:12:24.796843    4520 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.key.c66f4644: {Name:mk57cd1eea0ad6d7324af174a11b28aa7e9feacd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:12:24.797008    4520 certs.go:381] copying /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.crt.c66f4644 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.crt
	I0924 12:12:24.797184    4520 certs.go:385] copying /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.key.c66f4644 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.key
	I0924 12:12:24.797350    4520 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/proxy-client.key
	I0924 12:12:24.797502    4520 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/1598.pem (1338 bytes)
	W0924 12:12:24.797531    4520 certs.go:480] ignoring /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/1598_empty.pem, impossibly tiny 0 bytes
	I0924 12:12:24.797537    4520 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 12:12:24.797564    4520 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem (1078 bytes)
	I0924 12:12:24.797588    4520 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem (1123 bytes)
	I0924 12:12:24.797617    4520 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/key.pem (1675 bytes)
	I0924 12:12:24.797670    4520 certs.go:484] found cert: /Users/jenkins/minikube-integration/19700-1081/.minikube/files/etc/ssl/certs/15982.pem (1708 bytes)
	I0924 12:12:24.797983    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 12:12:24.805042    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 12:12:24.811807    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 12:12:24.818636    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 12:12:24.825997    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 12:12:24.833220    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0924 12:12:24.839904    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 12:12:24.846672    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 12:12:24.854057    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 12:12:24.860801    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/1598.pem --> /usr/share/ca-certificates/1598.pem (1338 bytes)
	I0924 12:12:24.867213    4520 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19700-1081/.minikube/files/etc/ssl/certs/15982.pem --> /usr/share/ca-certificates/15982.pem (1708 bytes)
	I0924 12:12:24.874261    4520 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 12:12:24.879539    4520 ssh_runner.go:195] Run: openssl version
	I0924 12:12:24.881504    4520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15982.pem && ln -fs /usr/share/ca-certificates/15982.pem /etc/ssl/certs/15982.pem"
	I0924 12:12:24.884559    4520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15982.pem
	I0924 12:12:24.885875    4520 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 18:35 /usr/share/ca-certificates/15982.pem
	I0924 12:12:24.885897    4520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15982.pem
	I0924 12:12:24.887589    4520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15982.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 12:12:24.890846    4520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 12:12:24.894280    4520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 12:12:24.895871    4520 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 18:19 /usr/share/ca-certificates/minikubeCA.pem
	I0924 12:12:24.895905    4520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 12:12:24.897895    4520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 12:12:24.901114    4520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1598.pem && ln -fs /usr/share/ca-certificates/1598.pem /etc/ssl/certs/1598.pem"
	I0924 12:12:24.903993    4520 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1598.pem
	I0924 12:12:24.905265    4520 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 18:35 /usr/share/ca-certificates/1598.pem
	I0924 12:12:24.905291    4520 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1598.pem
	I0924 12:12:24.907014    4520 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1598.pem /etc/ssl/certs/51391683.0"
	I0924 12:12:24.910059    4520 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 12:12:24.911363    4520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 12:12:24.913168    4520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 12:12:24.914962    4520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 12:12:24.917094    4520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 12:12:24.918749    4520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 12:12:24.920619    4520 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
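[Annotation] Each `openssl x509 -checkend 86400` run above asks one question: does this certificate expire within the next 24 hours (exit 0 means it is still valid past the window)? The equivalent check in pure Go, using an illustrative cert path copied from the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded cert at path expires
// inside the given window -- the openssl -checkend 86400 equivalent.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```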
	I0924 12:12:24.922271    4520 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-164000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50530 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0924 12:12:24.922347    4520 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0924 12:12:24.940746    4520 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 12:12:24.943632    4520 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 12:12:24.943642    4520 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 12:12:24.943663    4520 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 12:12:24.948085    4520 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 12:12:24.948398    4520 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-164000" does not appear in /Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:12:24.948504    4520 kubeconfig.go:62] /Users/jenkins/minikube-integration/19700-1081/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-164000" cluster setting kubeconfig missing "stopped-upgrade-164000" context setting]
	I0924 12:12:24.948704    4520 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/kubeconfig: {Name:mk406b8f0f5e016c0aa63af8364801bb91be8bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:12:24.949168    4520 kapi.go:59] client config for stopped-upgrade-164000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/client.key", CAFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10666e030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0924 12:12:24.949512    4520 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 12:12:24.952290    4520 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-164000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
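[Annotation] Drift detection here is just `diff -u` plus exit-code interpretation: 0 means the configs match, 1 means drift (above, the criSocket scheme and cgroupDriver changed between versions), and anything higher is a genuine failure. A sketch of that tri-state handling:

```go
package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs diff -u and maps its exit status:
// 0 = same, 1 = drift, >1 = real failure (e.g. a file is missing).
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // drift: reconfigure from the .new file
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("kubeadm config drift:\n" + diff)
	}
}
```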
	I0924 12:12:24.952297    4520 kubeadm.go:1160] stopping kube-system containers ...
	I0924 12:12:24.952348    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0924 12:12:24.966014    4520 docker.go:483] Stopping containers: [ea28f7380559 bb8ba6d324a9 0f96fd47fd94 089d88b4ee8a 876b9146846d 3b703291d050 918f102be99c 05293699e3a3]
	I0924 12:12:24.966084    4520 ssh_runner.go:195] Run: docker stop ea28f7380559 bb8ba6d324a9 0f96fd47fd94 089d88b4ee8a 876b9146846d 3b703291d050 918f102be99c 05293699e3a3
	I0924 12:12:24.976912    4520 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0924 12:12:24.982474    4520 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 12:12:24.985656    4520 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 12:12:24.985662    4520 kubeadm.go:157] found existing configuration files:
	
	I0924 12:12:24.985688    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/admin.conf
	I0924 12:12:24.988289    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 12:12:24.988315    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 12:12:24.990932    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/kubelet.conf
	I0924 12:12:24.993962    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 12:12:24.993987    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 12:12:24.996655    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/controller-manager.conf
	I0924 12:12:24.999106    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 12:12:24.999130    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 12:12:25.002354    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/scheduler.conf
	I0924 12:12:25.005815    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 12:12:25.005867    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 12:12:25.009109    4520 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 12:12:25.012160    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 12:12:25.034655    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 12:12:25.370575    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0924 12:12:25.494169    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0924 12:12:25.528438    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
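[Annotation] Rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config. A sketch of driving the same sequence, with paths copied from the log (this is not minikube's actual restart code):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// Same phase order as the log: certs, kubeconfigs, kubelet, static pods, etcd.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", cfg)
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("all init phases completed")
}
```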
	I0924 12:12:25.552203    4520 api_server.go:52] waiting for apiserver process to appear ...
	I0924 12:12:25.552284    4520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 12:12:26.054596    4520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 12:12:26.554311    4520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 12:12:26.560406    4520 api_server.go:72] duration metric: took 1.008253042s to wait for apiserver process to appear ...
	I0924 12:12:26.560417    4520 api_server.go:88] waiting for apiserver healthz status ...
	I0924 12:12:26.560429    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:31.562283    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:31.562319    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:36.562400    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:36.562467    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:41.562750    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:41.562775    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:46.563504    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:46.563522    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:51.564062    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:51.564109    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:12:56.565030    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:12:56.565102    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:01.566323    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:01.566363    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:06.567767    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:06.567793    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:11.569575    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:11.569620    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:16.571914    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:16.571955    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:21.574139    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:21.574178    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:26.576462    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
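[Annotation] The polling above is one HTTPS GET per attempt with a 5-second client timeout (hence the repeated "Client.Timeout exceeded" errors), retried until an overall deadline passes, after which minikube falls back to gathering component logs. A sketch of one probe; InsecureSkipVerify stands in for pinning the cluster CA, which real code should do:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz performs a single bounded healthz check against the
// apiserver, mirroring the 5s-per-attempt polling in the log.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the Client.Timeout errors above
		Transport: &http.Transport{
			// The apiserver cert is self-signed here; production code
			// should verify against the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body)
	return nil
}

func main() {
	for deadline := time.Now().Add(time.Minute); time.Now().Before(deadline); {
		if err := probeHealthz("https://10.0.2.15:8443/healthz"); err == nil {
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy; gather logs instead")
}
```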
	I0924 12:13:26.576623    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:13:26.589941    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:13:26.590031    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:13:26.601501    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:13:26.601589    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:13:26.611834    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:13:26.611924    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:13:26.622382    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:13:26.622474    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:13:26.632735    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:13:26.632818    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:13:26.645579    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:13:26.645667    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:13:26.656413    4520 logs.go:276] 0 containers: []
	W0924 12:13:26.656426    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:13:26.656499    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:13:26.667797    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:13:26.667816    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:13:26.667820    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:13:26.680239    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:13:26.680254    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:13:26.705730    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:13:26.705747    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:13:26.719835    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:13:26.719845    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:13:26.731033    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:13:26.731044    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:13:26.813786    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:13:26.813800    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:13:26.854445    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:13:26.854461    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:13:26.868913    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:13:26.868928    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:13:26.880976    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:13:26.880987    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:13:26.898658    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:13:26.898669    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:13:26.912239    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:13:26.912250    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:13:26.923988    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:13:26.923999    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:13:26.928672    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:13:26.928681    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:13:26.943703    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:13:26.943712    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:13:26.959579    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:13:26.959590    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:13:26.971706    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:13:26.971722    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:13:26.997458    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:13:26.997468    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
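[Annotation] Each gathering cycle enumerates containers by their k8s_<component> name filter, then tails the last 400 log lines of every hit, plus dmesg, journalctl, and kubectl describe nodes. A sketch of the per-component part (Docker CLI assumed; component list copied from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponent finds containers whose names carry the k8s_<name> prefix
// and tails their last 400 log lines, as the gathering loop above does.
func tailComponent(name string) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("list:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("== %s %s ==\n%s", name, id, logs)
	}
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler",
		"kube-controller-manager", "kube-proxy", "coredns", "storage-provisioner"} {
		tailComponent(c)
	}
}
```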
	I0924 12:13:29.538261    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:34.540664    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:34.540909    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:13:34.567073    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:13:34.567251    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:13:34.586995    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:13:34.587091    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:13:34.598301    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:13:34.598397    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:13:34.609060    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:13:34.609152    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:13:34.619261    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:13:34.619333    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:13:34.630146    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:13:34.630226    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:13:34.640676    4520 logs.go:276] 0 containers: []
	W0924 12:13:34.640688    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:13:34.640759    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:13:34.652195    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:13:34.652216    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:13:34.652222    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:13:34.664207    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:13:34.664218    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:13:34.669505    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:13:34.669513    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:13:34.710227    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:13:34.710242    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:13:34.722357    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:13:34.722366    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:13:34.745961    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:13:34.745969    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:13:34.759729    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:13:34.759739    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:13:34.775107    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:13:34.775123    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:13:34.787004    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:13:34.787014    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:13:34.802309    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:13:34.802319    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:13:34.814654    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:13:34.814669    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:13:34.832121    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:13:34.832136    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:13:34.850881    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:13:34.850892    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:13:34.889588    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:13:34.889602    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:13:34.903436    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:13:34.903451    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:13:34.914770    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:13:34.914784    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:13:34.951523    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:13:34.951541    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
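	The diagnostic passes in this loop all have the same shape: first enumerate the control-plane containers one component at a time with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` (running and exited alike, so the two IDs most components report here are consistent with a pre-restart container plus its replacement), then tail the last 400 lines of each container with `docker logs --tail 400 <id>`, alongside `journalctl` for kubelet and Docker, `dmesg`, `kubectl describe nodes`, and a `crictl ps -a || docker ps -a` fallback for container status. A minimal local sketch of the enumerate-then-tail steps, assuming only a docker CLI on PATH (minikube drives the same commands over SSH inside the guest VM, and this is not its actual code):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the enumeration step in the log above: list every
// container, running or exited, whose name matches k8s_<component>, and
// return the container IDs.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors the gathering step: fetch the last 400 log lines of one
// container, as in `docker logs --tail 400 <id>`. CombinedOutput is used
// because container logs are often written to stderr.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("--- %s: %d bytes of logs\n", id, len(logs))
		}
	}
}
```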
	I0924 12:13:37.469561    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:42.471933    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
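	The five-second spacing between each `Checking apiserver healthz` line and the `stopped:` line that follows (12:13:37.47 to 12:13:42.47 above) matches a Go `http.Client` with a 5s timeout; "Client.Timeout exceeded while awaiting headers" is that client's timeout error string. A minimal sketch of such a probe, with the 5-second timeout assumed from the timestamps and TLS verification skipped purely for illustration (the real check authenticates against the cluster's certificates):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues a GET against the apiserver's /healthz endpoint and
// returns an error if it does not answer within the client timeout. Sketch
// only: minikube's actual probe presents the cluster's client certificates
// rather than disabling verification.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the 5s gap in the log above
		Transport: &http.Transport{
			// Illustration only; do not skip verification in real use.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// On timeout this reads: ... context deadline exceeded
		// (Client.Timeout exceeded while awaiting headers)
		return err
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
	return nil
}

func main() {
	if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}
```

	Each failed probe sends the runner back into the enumerate-and-gather pass above, so the same cycle repeats roughly every eight seconds (the 5s probe timeout plus about 3s of log collection) through the rest of this section.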
	I0924 12:13:42.472239    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:13:42.495104    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:13:42.495231    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:13:42.512856    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:13:42.512948    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:13:42.525617    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:13:42.525702    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:13:42.536454    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:13:42.536530    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:13:42.546964    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:13:42.547045    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:13:42.557256    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:13:42.557332    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:13:42.567612    4520 logs.go:276] 0 containers: []
	W0924 12:13:42.567625    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:13:42.567688    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:13:42.578032    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:13:42.578053    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:13:42.578058    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:13:42.623055    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:13:42.623070    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:13:42.637371    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:13:42.637383    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:13:42.677234    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:13:42.677246    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:13:42.691997    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:13:42.692009    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:13:42.730383    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:13:42.730394    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:13:42.741615    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:13:42.741627    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:13:42.756551    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:13:42.756568    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:13:42.781198    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:13:42.781205    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:13:42.797467    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:13:42.797480    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:13:42.809227    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:13:42.809241    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:13:42.813214    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:13:42.813220    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:13:42.827461    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:13:42.827476    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:13:42.842294    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:13:42.842308    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:13:42.854451    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:13:42.854465    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:13:42.867041    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:13:42.867056    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:13:42.885083    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:13:42.885092    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:13:45.399630    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:50.402052    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:50.402249    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:13:50.426580    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:13:50.426680    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:13:50.439623    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:13:50.439715    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:13:50.450637    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:13:50.450720    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:13:50.468695    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:13:50.468789    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:13:50.479240    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:13:50.479324    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:13:50.489876    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:13:50.489968    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:13:50.500422    4520 logs.go:276] 0 containers: []
	W0924 12:13:50.500437    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:13:50.500513    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:13:50.511472    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:13:50.511492    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:13:50.511497    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:13:50.522582    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:13:50.522595    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:13:50.540872    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:13:50.540881    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:13:50.544939    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:13:50.544944    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:13:50.558605    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:13:50.558619    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:13:50.569704    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:13:50.569717    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:13:50.593523    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:13:50.593530    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:13:50.627779    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:13:50.627789    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:13:50.645618    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:13:50.645628    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:13:50.663663    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:13:50.663676    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:13:50.685027    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:13:50.685038    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:13:50.699037    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:13:50.699053    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:13:50.713795    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:13:50.713810    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:13:50.725217    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:13:50.725232    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:13:50.761921    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:13:50.761930    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:13:50.798824    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:13:50.798838    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:13:50.813887    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:13:50.813900    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:13:53.335390    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:13:58.337585    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:13:58.337776    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:13:58.352737    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:13:58.352835    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:13:58.365167    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:13:58.365251    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:13:58.377369    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:13:58.377447    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:13:58.387606    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:13:58.387700    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:13:58.398212    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:13:58.398291    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:13:58.408918    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:13:58.408997    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:13:58.419115    4520 logs.go:276] 0 containers: []
	W0924 12:13:58.419127    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:13:58.419202    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:13:58.429500    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:13:58.429519    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:13:58.429524    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:13:58.446681    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:13:58.446697    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:13:58.457732    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:13:58.457741    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:13:58.472282    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:13:58.472292    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:13:58.510493    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:13:58.510510    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:13:58.524524    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:13:58.524533    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:13:58.538560    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:13:58.538574    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:13:58.564191    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:13:58.564199    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:13:58.575333    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:13:58.575348    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:13:58.590162    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:13:58.590175    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:13:58.601736    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:13:58.601752    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:13:58.637702    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:13:58.637714    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:13:58.650339    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:13:58.650351    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:13:58.662187    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:13:58.662199    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:13:58.673351    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:13:58.673367    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:13:58.713004    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:13:58.713015    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:13:58.717270    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:13:58.717276    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:01.239794    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:06.242018    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:06.242139    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:06.256104    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:14:06.256201    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:06.266788    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:14:06.266862    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:06.277257    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:14:06.277327    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:06.287558    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:14:06.287638    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:06.297984    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:14:06.298061    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:06.308516    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:14:06.308594    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:06.322974    4520 logs.go:276] 0 containers: []
	W0924 12:14:06.322985    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:06.323057    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:06.333458    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:14:06.333478    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:14:06.333483    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:14:06.346891    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:14:06.346901    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:14:06.360745    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:14:06.360756    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:14:06.374346    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:14:06.374358    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:14:06.388949    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:06.388960    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:06.393003    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:14:06.393013    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:06.411462    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:14:06.411472    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:14:06.423093    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:06.423103    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:06.445914    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:14:06.445921    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:14:06.459620    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:14:06.459631    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:14:06.497305    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:14:06.497317    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:14:06.508835    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:14:06.508849    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:14:06.530965    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:14:06.530976    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:14:06.542520    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:14:06.542531    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:06.554070    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:06.554082    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:06.593038    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:06.593047    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:06.627440    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:14:06.627449    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:14:09.141207    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:14.143516    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:14.143663    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:14.155843    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:14:14.155940    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:14.169312    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:14:14.169396    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:14.179729    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:14:14.179810    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:14.190587    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:14:14.190676    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:14.201596    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:14:14.201680    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:14.212457    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:14:14.212544    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:14.223731    4520 logs.go:276] 0 containers: []
	W0924 12:14:14.223741    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:14.223812    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:14.234295    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:14:14.234311    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:14:14.234317    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:14:14.273086    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:14:14.273097    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:14:14.299964    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:14:14.299976    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:14:14.313084    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:14:14.313094    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:14:14.324212    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:14:14.324222    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:14:14.336028    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:14:14.336040    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:14.352330    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:14:14.352341    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:14:14.363751    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:14:14.363762    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:14:14.375242    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:14.375252    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:14.411984    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:14:14.411994    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:14:14.425745    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:14:14.425755    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:14:14.443825    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:14:14.443838    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:14:14.455830    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:14:14.455844    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:14.467549    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:14.467560    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:14.471613    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:14.471619    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:14.505368    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:14:14.505382    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:14:14.526231    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:14.526241    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:17.053103    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:22.055431    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:22.055580    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:22.069844    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:14:22.069942    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:22.081021    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:14:22.081107    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:22.096634    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:14:22.096717    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:22.107912    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:14:22.108003    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:22.118479    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:14:22.118559    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:22.128667    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:14:22.128752    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:22.138934    4520 logs.go:276] 0 containers: []
	W0924 12:14:22.138945    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:22.139016    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:22.148990    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:14:22.149009    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:22.149014    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:22.186391    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:14:22.186405    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:14:22.202483    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:14:22.202497    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:14:22.218316    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:14:22.218331    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:14:22.229997    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:22.230009    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:22.234549    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:14:22.234557    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:14:22.273709    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:14:22.273725    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:22.288418    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:14:22.288431    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:14:22.300313    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:14:22.300328    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:14:22.315306    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:14:22.315320    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:14:22.327276    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:14:22.327290    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:14:22.338853    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:22.338867    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:22.362212    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:22.362221    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:22.399477    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:14:22.399486    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:22.410684    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:14:22.410696    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:14:22.428331    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:14:22.428342    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:14:22.442241    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:14:22.442251    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:14:24.955719    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:29.957922    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:29.958087    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:29.971817    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:14:29.971909    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:29.984534    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:14:29.984618    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:29.995226    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:14:29.995320    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:30.005761    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:14:30.005845    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:30.016148    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:14:30.016234    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:30.027559    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:14:30.027643    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:30.037545    4520 logs.go:276] 0 containers: []
	W0924 12:14:30.037560    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:30.037636    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:30.048022    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:14:30.048040    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:14:30.048046    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:30.062337    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:14:30.062348    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:14:30.073515    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:14:30.073526    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:14:30.084990    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:14:30.084999    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:14:30.104969    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:14:30.104980    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:14:30.116266    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:30.116278    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:30.154782    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:30.154793    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:30.159035    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:30.159041    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:30.193490    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:14:30.193507    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:14:30.231465    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:14:30.231475    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:14:30.243745    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:14:30.243761    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:14:30.265309    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:14:30.265326    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:30.277084    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:14:30.277098    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:14:30.291229    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:14:30.291239    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:14:30.305081    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:14:30.305096    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:14:30.318299    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:14:30.318314    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:14:30.329600    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:30.329611    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:32.855527    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:37.857232    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:37.857420    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:37.872165    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:14:37.872271    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:37.884017    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:14:37.884106    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:37.895034    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:14:37.895125    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:37.911882    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:14:37.911976    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:37.922982    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:14:37.923071    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:37.934582    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:14:37.934664    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:37.945146    4520 logs.go:276] 0 containers: []
	W0924 12:14:37.945157    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:37.945232    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:37.968050    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:14:37.968068    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:14:37.968073    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:14:37.979908    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:14:37.979924    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:37.991631    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:14:37.991646    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:14:38.004888    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:14:38.004902    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:14:38.017674    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:38.017685    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:38.057485    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:14:38.057495    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:14:38.071484    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:14:38.071494    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:14:38.085551    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:14:38.085566    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:14:38.097139    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:38.097151    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:38.101277    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:38.101287    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:38.137296    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:14:38.137310    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:38.154687    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:14:38.154698    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:14:38.166223    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:14:38.166235    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:14:38.181594    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:14:38.181604    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:14:38.200096    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:38.200106    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:38.224473    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:14:38.224485    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:14:38.238941    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:14:38.238955    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:14:40.781275    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:45.783669    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:45.784120    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:45.816861    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:14:45.817035    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:45.836468    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:14:45.836584    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:45.851584    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:14:45.851678    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:45.864415    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:14:45.864507    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:45.874989    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:14:45.875079    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:45.886152    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:14:45.886232    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:45.896788    4520 logs.go:276] 0 containers: []
	W0924 12:14:45.896800    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:45.896864    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:45.907989    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:14:45.908008    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:14:45.908014    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:14:45.924091    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:14:45.924104    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:14:45.936174    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:14:45.936191    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:14:45.950350    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:14:45.950364    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:14:45.962545    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:45.962556    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:45.967185    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:14:45.967195    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:14:45.981992    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:14:45.982005    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:14:45.994043    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:14:45.994055    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:14:46.006090    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:14:46.006102    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:46.018498    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:46.018509    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:46.057790    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:14:46.057804    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:14:46.072021    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:14:46.072032    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:14:46.089394    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:14:46.089407    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:46.104104    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:14:46.104115    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:14:46.141454    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:14:46.141468    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:14:46.155417    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:46.155431    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:46.179336    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:46.179343    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:48.715487    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:14:53.717948    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:14:53.718382    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:14:53.748113    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:14:53.748267    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:14:53.766542    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:14:53.766656    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:14:53.780958    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:14:53.781042    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:14:53.793006    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:14:53.793102    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:14:53.805696    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:14:53.805769    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:14:53.816637    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:14:53.816725    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:14:53.826802    4520 logs.go:276] 0 containers: []
	W0924 12:14:53.826814    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:14:53.826886    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:14:53.842095    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:14:53.842114    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:14:53.842120    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:14:53.856215    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:14:53.856226    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:14:53.871401    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:14:53.871416    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:14:53.910341    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:14:53.910350    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:14:53.946092    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:14:53.946107    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:14:53.959725    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:14:53.959741    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:14:53.971536    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:14:53.971547    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:14:54.010058    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:14:54.010070    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:14:54.022013    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:14:54.022028    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:14:54.033199    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:14:54.033211    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:14:54.048085    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:14:54.048098    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:14:54.059765    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:14:54.059781    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:14:54.070813    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:14:54.070828    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:14:54.075022    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:14:54.075028    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:14:54.088853    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:14:54.088863    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:14:54.113513    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:14:54.113526    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:14:54.131099    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:14:54.131110    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:14:56.646651    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:01.649238    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:01.649687    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:01.690230    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:01.690400    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:01.711386    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:01.711525    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:01.726807    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:01.726904    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:01.739382    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:01.739473    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:01.751493    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:01.751579    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:01.762584    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:01.762660    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:01.779748    4520 logs.go:276] 0 containers: []
	W0924 12:15:01.779760    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:01.779834    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:01.790333    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
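
Each failed probe triggers a discovery pass like the one above: one `docker ps -a` per control-plane component, filtered on the `k8s_<component>` container-name prefix and formatted to emit only IDs. Two IDs per restarted component (old and new container), one for coredns and kube-proxy, zero for kindnet, which produces the warning. A hedged sketch of that step, with illustrative names and error handling:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the discovery command in the log:
// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```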
	I0924 12:15:01.790349    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:01.790355    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:01.829593    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:01.829607    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:01.841770    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:01.841782    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:01.853448    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:01.853458    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:15:01.864582    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:01.864594    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:01.868759    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:01.868768    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:01.883980    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:01.883990    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:01.899375    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:01.899386    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:01.920456    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:01.920468    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:01.945958    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:01.945969    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:01.957970    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:01.957981    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:01.996990    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:01.997008    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:02.033279    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:02.033290    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:02.048899    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:02.048912    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:02.064632    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:02.064647    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:02.077447    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:02.077460    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:02.094486    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:02.094496    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
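
Discovery is followed by a gathering pass like the one above: every container ID is dumped with `docker logs --tail 400`, while host-side sources (kubelet, Docker/cri-docker, dmesg, `kubectl describe nodes`) are read via journalctl and friends. A compressed sketch, using exec.Command in place of minikube's ssh_runner (an assumption; the real code runs these commands over SSH inside the VM and buffers the output into the report):

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather mirrors one "Gathering logs for <name> [<id>] ..." step.
func gather(name, id string) {
	fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
	out, err := exec.Command("/bin/bash", "-c",
		"docker logs --tail 400 "+id).CombinedOutput()
	if err != nil {
		fmt.Println("gather failed:", err)
		return
	}
	_ = out // a real implementation buffers this into the problem report
}

func main() {
	// Container-backed sources use docker logs ...
	gather("kube-apiserver", "876b9146846d")
	gather("etcd", "ea28f7380559")
	// ... while host-side sources come from journalctl/dmesg, as in the log.
	exec.Command("/bin/bash", "-c",
		"sudo journalctl -u kubelet -n 400").Run()
	exec.Command("/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").Run()
}
```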
	I0924 12:15:04.611678    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:09.612893    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:09.613087    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:09.626330    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:09.626421    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:09.637082    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:09.637170    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:09.648327    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:09.648404    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:09.659178    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:09.659266    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:09.669783    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:09.669855    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:09.680388    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:09.680467    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:09.690756    4520 logs.go:276] 0 containers: []
	W0924 12:15:09.690770    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:09.690833    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:09.701060    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:15:09.701079    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:09.701085    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:09.740274    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:09.740283    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:09.777707    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:09.777718    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:09.792534    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:09.792547    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:09.803704    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:09.803717    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:09.816083    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:09.816094    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:09.838964    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:09.838979    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:09.843218    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:09.843224    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:09.860949    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:09.860960    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:09.873094    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:09.873105    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:09.910927    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:09.910943    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:09.925952    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:09.925968    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:09.941231    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:09.941243    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:09.957192    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:09.957206    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:15:09.972545    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:09.972557    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:09.984770    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:09.984781    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:09.998946    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:09.998958    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:15:12.512939    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:17.513479    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:17.513727    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:17.530794    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:17.530901    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:17.543246    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:17.543339    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:17.554395    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:17.554479    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:17.564993    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:17.565078    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:17.575425    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:17.575510    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:17.586060    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:17.586138    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:17.596268    4520 logs.go:276] 0 containers: []
	W0924 12:15:17.596280    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:17.596353    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:17.606317    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:15:17.606348    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:17.606354    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:17.610808    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:17.610814    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:17.624932    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:17.624946    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:17.638807    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:17.638817    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:17.652033    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:17.652044    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:17.690726    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:17.690736    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:17.725899    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:17.725913    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:15:17.740519    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:17.740529    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:17.756705    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:17.756720    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:17.779690    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:17.779700    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:17.791254    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:17.791265    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:17.805989    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:17.806000    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:17.817682    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:17.817693    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:17.857642    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:17.857657    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:17.877153    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:17.877163    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:17.894468    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:17.894479    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:17.916736    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:17.916751    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:15:20.430168    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:25.432524    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:25.432778    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:25.450117    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:25.450228    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:25.462745    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:25.462836    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:25.473521    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:25.473593    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:25.485391    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:25.485474    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:25.496362    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:25.496451    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:25.506998    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:25.507084    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:25.517398    4520 logs.go:276] 0 containers: []
	W0924 12:15:25.517411    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:25.517475    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:25.531711    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:15:25.531730    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:25.531735    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:25.546877    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:25.546892    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:25.561133    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:25.561148    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:25.572095    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:25.572106    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:15:25.582988    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:25.582999    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:25.606971    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:25.606978    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:25.645050    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:25.645066    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:25.649453    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:25.649462    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:25.684145    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:25.684160    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:25.695793    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:25.695807    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:25.707343    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:25.707358    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:15:25.722602    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:25.722616    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:25.739018    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:25.739028    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:25.750492    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:25.750504    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:25.769518    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:25.769526    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:25.784513    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:25.784521    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:25.822651    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:25.822666    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:28.337123    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:33.339448    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:33.339699    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:33.376928    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:33.377022    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:33.389041    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:33.389132    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:33.399697    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:33.399780    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:33.410280    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:33.410362    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:33.420543    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:33.420623    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:33.431501    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:33.431586    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:33.441661    4520 logs.go:276] 0 containers: []
	W0924 12:15:33.441674    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:33.441746    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:33.452509    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:15:33.452527    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:33.452534    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:33.494586    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:33.494595    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:33.510670    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:33.510685    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:33.522763    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:33.522774    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:33.537605    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:33.537619    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:33.542180    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:33.542186    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:33.576632    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:33.576645    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:33.590040    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:33.590052    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:33.604648    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:33.604661    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:33.616236    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:33.616248    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:33.628400    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:33.628413    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:33.645848    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:33.645862    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:33.658394    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:33.658406    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:33.676049    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:33.676059    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:15:33.687414    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:33.687425    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:33.711747    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:33.711757    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:33.750046    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:33.750054    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:15:36.270691    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:41.273037    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:41.273238    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:41.287422    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:41.287505    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:41.299995    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:41.300085    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:41.310813    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:41.310895    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:41.320845    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:41.320921    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:41.331073    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:41.331148    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:41.341881    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:41.341965    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:41.352443    4520 logs.go:276] 0 containers: []
	W0924 12:15:41.352461    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:41.352540    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:41.362837    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:15:41.362857    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:41.362862    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:15:41.374438    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:41.374450    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:41.385954    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:41.385967    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:15:41.404781    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:41.404795    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:41.417526    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:41.417543    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:41.432352    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:41.432363    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:41.444068    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:41.444078    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:41.480574    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:41.480582    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:41.494896    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:41.494906    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:41.510078    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:41.510088    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:41.524660    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:41.524670    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:41.542128    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:41.542136    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:41.546554    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:41.546562    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:41.580569    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:41.580585    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:41.618453    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:41.618463    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:41.632320    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:41.632333    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:41.651567    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:41.651577    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:44.177157    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:49.179390    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:49.179649    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:49.213937    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:49.214073    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:49.246400    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:49.246494    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:49.262134    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:49.262220    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:49.272299    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:49.272385    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:49.282603    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:49.282686    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:49.293138    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:49.293211    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:49.303449    4520 logs.go:276] 0 containers: []
	W0924 12:15:49.303462    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:49.303537    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:49.314183    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:15:49.314200    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:49.314205    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:49.318419    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:49.318429    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:15:49.337053    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:49.337064    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:49.349249    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:49.349261    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:49.372949    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:49.372957    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:15:49.384173    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:49.384182    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:49.398298    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:49.398312    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:49.436496    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:49.436511    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:49.449895    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:49.449904    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:49.464930    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:49.464941    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:49.488614    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:49.488627    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:49.500517    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:49.500528    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:49.538400    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:49.538412    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:49.552232    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:49.552242    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:49.588082    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:49.588095    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:49.603081    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:49.603095    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:49.616115    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:49.616125    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:15:52.128438    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:15:57.130707    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:15:57.130905    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:15:57.145779    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:15:57.145872    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:15:57.157319    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:15:57.157398    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:15:57.169043    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:15:57.169124    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:15:57.180523    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:15:57.180613    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:15:57.191320    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:15:57.191403    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:15:57.211213    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:15:57.211295    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:15:57.221609    4520 logs.go:276] 0 containers: []
	W0924 12:15:57.221621    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:15:57.221690    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:15:57.232290    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:15:57.232308    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:15:57.232314    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:15:57.271832    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:15:57.271840    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:15:57.275932    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:15:57.275941    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:15:57.294392    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:15:57.294405    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:15:57.306161    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:15:57.306175    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:15:57.319936    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:15:57.319949    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:15:57.331450    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:15:57.331459    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:15:57.365520    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:15:57.365534    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:15:57.379771    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:15:57.379785    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:15:57.393673    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:15:57.393685    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:15:57.405041    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:15:57.405051    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:15:57.420724    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:15:57.420732    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:15:57.438010    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:15:57.438020    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:15:57.459972    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:15:57.459980    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:15:57.502474    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:15:57.502483    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:15:57.514830    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:15:57.514842    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:15:57.531633    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:15:57.531644    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:00.044954    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:05.045456    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:05.045680    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:05.063802    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:16:05.063911    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:05.076957    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:16:05.077047    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:05.088456    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:16:05.088527    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:05.098799    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:16:05.098886    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:05.108796    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:16:05.108873    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:05.119074    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:16:05.119159    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:05.128930    4520 logs.go:276] 0 containers: []
	W0924 12:16:05.128948    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:05.129025    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:05.139434    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:16:05.139455    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:05.139461    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:05.174743    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:16:05.174755    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:16:05.189545    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:16:05.189554    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:16:05.201524    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:16:05.201534    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:16:05.216274    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:16:05.216285    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:16:05.228074    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:16:05.228084    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:16:05.241341    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:16:05.241350    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:16:05.252671    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:05.252681    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:05.276147    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:05.276154    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:05.280612    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:16:05.280621    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:16:05.294451    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:16:05.294462    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:16:05.305557    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:16:05.305568    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:16:05.319718    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:16:05.319730    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:05.332778    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:05.332789    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:05.370862    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:16:05.370870    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:16:05.413222    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:16:05.413234    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:16:05.430383    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:16:05.430396    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:16:07.943879    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:12.945969    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:12.946125    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:12.958252    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:16:12.958351    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:12.969828    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:16:12.969916    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:12.980669    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:16:12.980753    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:12.991417    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:16:12.991505    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:13.001784    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:16:13.001870    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:13.012955    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:16:13.013031    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:13.023772    4520 logs.go:276] 0 containers: []
	W0924 12:16:13.023784    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:13.023860    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:13.033750    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:16:13.033769    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:16:13.033774    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:16:13.047690    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:16:13.047699    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:16:13.085975    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:16:13.085991    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:16:13.097645    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:16:13.097655    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:16:13.115722    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:16:13.115731    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:16:13.126696    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:13.126706    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:13.163860    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:16:13.163870    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:16:13.178399    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:16:13.178409    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:16:13.190269    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:16:13.190280    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:16:13.201487    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:13.201499    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:13.224944    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:16:13.224953    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:13.236676    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:13.236688    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:13.241261    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:13.241270    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:13.275773    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:16:13.275784    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
	I0924 12:16:13.292612    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:16:13.292623    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:16:13.307148    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:16:13.307164    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:16:13.322437    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:16:13.322447    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:16:15.839346    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:20.841645    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:20.842241    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:16:20.879166    4520 logs.go:276] 2 containers: [1d75ba7f6e39 876b9146846d]
	I0924 12:16:20.879336    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:16:20.900710    4520 logs.go:276] 2 containers: [87b41293297a ea28f7380559]
	I0924 12:16:20.900829    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:16:20.917291    4520 logs.go:276] 1 containers: [115e170a518b]
	I0924 12:16:20.917390    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:16:20.929738    4520 logs.go:276] 2 containers: [29f9c0ff3c91 bb8ba6d324a9]
	I0924 12:16:20.929824    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:16:20.943040    4520 logs.go:276] 1 containers: [196550cf1443]
	I0924 12:16:20.943128    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:16:20.954183    4520 logs.go:276] 2 containers: [b2954efa65d3 089d88b4ee8a]
	I0924 12:16:20.954272    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:16:20.964951    4520 logs.go:276] 0 containers: []
	W0924 12:16:20.964963    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:16:20.965037    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:16:20.975539    4520 logs.go:276] 2 containers: [d69f2aa97c22 f6169f60077f]
	I0924 12:16:20.975557    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:16:20.975563    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:16:20.980102    4520 logs.go:123] Gathering logs for kube-scheduler [29f9c0ff3c91] ...
	I0924 12:16:20.980108    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9c0ff3c91"
	I0924 12:16:20.992411    4520 logs.go:123] Gathering logs for storage-provisioner [d69f2aa97c22] ...
	I0924 12:16:20.992420    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d69f2aa97c22"
	I0924 12:16:21.004475    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:16:21.004484    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:16:21.017684    4520 logs.go:123] Gathering logs for kube-apiserver [876b9146846d] ...
	I0924 12:16:21.017697    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876b9146846d"
	I0924 12:16:21.056004    4520 logs.go:123] Gathering logs for coredns [115e170a518b] ...
	I0924 12:16:21.056015    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 115e170a518b"
	I0924 12:16:21.068000    4520 logs.go:123] Gathering logs for kube-controller-manager [b2954efa65d3] ...
	I0924 12:16:21.068013    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2954efa65d3"
	I0924 12:16:21.091437    4520 logs.go:123] Gathering logs for kube-controller-manager [089d88b4ee8a] ...
	I0924 12:16:21.091446    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 089d88b4ee8a"
	I0924 12:16:21.110845    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:16:21.110854    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:16:21.152610    4520 logs.go:123] Gathering logs for etcd [87b41293297a] ...
	I0924 12:16:21.152623    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87b41293297a"
	I0924 12:16:21.168583    4520 logs.go:123] Gathering logs for etcd [ea28f7380559] ...
	I0924 12:16:21.168596    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea28f7380559"
	I0924 12:16:21.183629    4520 logs.go:123] Gathering logs for storage-provisioner [f6169f60077f] ...
	I0924 12:16:21.183645    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6169f60077f"
	I0924 12:16:21.194860    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:16:21.194871    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:16:21.216538    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:16:21.216546    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:16:21.253540    4520 logs.go:123] Gathering logs for kube-apiserver [1d75ba7f6e39] ...
	I0924 12:16:21.253550    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d75ba7f6e39"
	I0924 12:16:21.268074    4520 logs.go:123] Gathering logs for kube-scheduler [bb8ba6d324a9] ...
	I0924 12:16:21.268088    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb8ba6d324a9"
	I0924 12:16:21.282928    4520 logs.go:123] Gathering logs for kube-proxy [196550cf1443] ...
	I0924 12:16:21.282939    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 196550cf1443"
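Each log-gathering pass above follows the same two-step pattern: list container IDs per control-plane component with docker ps -a --filter=name=k8s_<component>, then tail the last 400 lines of each match. The sketch below reproduces that loop with local os/exec as a stand-in for the SSH runner; the component names and docker flags are taken verbatim from the log, everything else is illustrative.

```go
// loggather.go — a sketch of the "Gathering logs for ..." loop above.
// It runs docker locally via os/exec; the real code ships the same commands
// over SSH (ssh_runner.go). Component names are copied from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or not) for one k8s component.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			fmt.Printf("Gathering logs for %s [%s] ...\n", c, id)
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}
```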
	I0924 12:16:23.796658    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:28.798927    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:28.799082    4520 kubeadm.go:597] duration metric: took 4m3.857838083s to restartPrimaryControlPlane
	W0924 12:16:28.799205    4520 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0924 12:16:28.799251    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0924 12:16:29.788346    4520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 12:16:29.793338    4520 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 12:16:29.796186    4520 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 12:16:29.798825    4520 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 12:16:29.798832    4520 kubeadm.go:157] found existing configuration files:
	
	I0924 12:16:29.798862    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/admin.conf
	I0924 12:16:29.801314    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 12:16:29.801347    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 12:16:29.804392    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/kubelet.conf
	I0924 12:16:29.807485    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 12:16:29.807508    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 12:16:29.810083    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/controller-manager.conf
	I0924 12:16:29.812602    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 12:16:29.812623    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 12:16:29.816026    4520 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/scheduler.conf
	I0924 12:16:29.819347    4520 kubeadm.go:163] "https://control-plane.minikube.internal:50530" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50530 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 12:16:29.819370    4520 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
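The cleanup above keeps an /etc/kubernetes/*.conf file only if it mentions the expected control-plane endpoint; here every grep exits with status 2 because the files are gone after kubeadm reset, so each one is unconditionally rm'd. A sketch of the same check-and-remove logic follows; the paths and endpoint are copied from the log, and the real code runs these as sudo shell commands over SSH rather than in-process.

```go
// staleconfig.go — sketch of the stale-kubeconfig check seen above: keep a
// config file only if it references the expected control-plane endpoint.
// Running this for real needs the same root privileges the test gets via sudo.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50530"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treat as stale, matching the
			// "may not be in ... - will remove" message followed by rm -f.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			os.Remove(conf)
		}
	}
}
```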
	I0924 12:16:29.822025    4520 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 12:16:29.838727    4520 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0924 12:16:29.838869    4520 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 12:16:29.885917    4520 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 12:16:29.885980    4520 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 12:16:29.886046    4520 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0924 12:16:29.940606    4520 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 12:16:29.944793    4520 out.go:235]   - Generating certificates and keys ...
	I0924 12:16:29.944831    4520 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 12:16:29.944865    4520 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 12:16:29.944915    4520 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0924 12:16:29.944951    4520 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0924 12:16:29.944986    4520 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0924 12:16:29.945020    4520 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0924 12:16:29.945093    4520 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0924 12:16:29.945125    4520 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0924 12:16:29.945163    4520 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0924 12:16:29.945204    4520 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0924 12:16:29.945227    4520 kubeadm.go:310] [certs] Using the existing "sa" key
	I0924 12:16:29.945259    4520 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 12:16:30.074324    4520 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 12:16:30.298264    4520 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 12:16:30.483238    4520 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 12:16:30.621913    4520 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 12:16:30.650585    4520 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 12:16:30.650952    4520 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 12:16:30.651044    4520 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 12:16:30.750133    4520 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 12:16:30.753759    4520 out.go:235]   - Booting up control plane ...
	I0924 12:16:30.753842    4520 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 12:16:30.753881    4520 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 12:16:30.753915    4520 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 12:16:30.754000    4520 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 12:16:30.766439    4520 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0924 12:16:35.268949    4520 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502323 seconds
	I0924 12:16:35.269030    4520 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 12:16:35.273506    4520 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 12:16:35.796010    4520 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 12:16:35.796527    4520 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-164000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 12:16:36.299967    4520 kubeadm.go:310] [bootstrap-token] Using token: c9u9by.23bn0i7xcp6mmhzp
	I0924 12:16:36.302920    4520 out.go:235]   - Configuring RBAC rules ...
	I0924 12:16:36.302984    4520 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 12:16:36.303035    4520 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 12:16:36.304952    4520 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 12:16:36.309627    4520 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 12:16:36.310567    4520 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 12:16:36.311427    4520 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 12:16:36.316021    4520 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 12:16:36.484223    4520 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 12:16:36.704005    4520 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 12:16:36.704500    4520 kubeadm.go:310] 
	I0924 12:16:36.704543    4520 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 12:16:36.704548    4520 kubeadm.go:310] 
	I0924 12:16:36.704585    4520 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 12:16:36.704658    4520 kubeadm.go:310] 
	I0924 12:16:36.704686    4520 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 12:16:36.704757    4520 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 12:16:36.704797    4520 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 12:16:36.704805    4520 kubeadm.go:310] 
	I0924 12:16:36.704834    4520 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 12:16:36.704842    4520 kubeadm.go:310] 
	I0924 12:16:36.704887    4520 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 12:16:36.704891    4520 kubeadm.go:310] 
	I0924 12:16:36.704958    4520 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 12:16:36.705001    4520 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 12:16:36.705037    4520 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 12:16:36.705040    4520 kubeadm.go:310] 
	I0924 12:16:36.705131    4520 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 12:16:36.705223    4520 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 12:16:36.705227    4520 kubeadm.go:310] 
	I0924 12:16:36.705317    4520 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token c9u9by.23bn0i7xcp6mmhzp \
	I0924 12:16:36.705409    4520 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4250e15ce19ea6ee8d936fb77d1a59ad22f9367fb00a8a9aa9e1b7fb7d1933b3 \
	I0924 12:16:36.705458    4520 kubeadm.go:310] 	--control-plane 
	I0924 12:16:36.705462    4520 kubeadm.go:310] 
	I0924 12:16:36.705505    4520 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 12:16:36.705513    4520 kubeadm.go:310] 
	I0924 12:16:36.705584    4520 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token c9u9by.23bn0i7xcp6mmhzp \
	I0924 12:16:36.705644    4520 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4250e15ce19ea6ee8d936fb77d1a59ad22f9367fb00a8a9aa9e1b7fb7d1933b3 
	I0924 12:16:36.705729    4520 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
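The join commands printed above pin the cluster CA with --discovery-token-ca-cert-hash. By kubeadm convention that value is the hex-encoded SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo, so it can be recomputed from the ca.crt in the certificateDir reported earlier; a sketch:

```go
// cahash.go — recompute the --discovery-token-ca-cert-hash shown in the join
// command above: sha256 over the CA certificate's DER-encoded
// SubjectPublicKeyInfo, hex-encoded (standard kubeadm convention).
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// certificateDir reported by kubeadm above: /var/lib/minikube/certs
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // should match the hash in the kubeadm join lines
}
```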
	I0924 12:16:36.705740    4520 cni.go:84] Creating CNI manager for ""
	I0924 12:16:36.705748    4520 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:16:36.710171    4520 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 12:16:36.717182    4520 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0924 12:16:36.720106    4520 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
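The CNI step above creates /etc/cni/net.d and copies a 496-byte 1-k8s.conflist into it. The file's contents are not shown in the log; the sketch below writes a representative bridge-plugin conflist, where the JSON is an assumption modeled on the standard CNI bridge configuration rather than minikube's literal file.

```go
// cniconf.go — sketch of the "Configuring bridge CNI" step: write a bridge
// conflist into /etc/cni/net.d. The JSON below is a representative
// bridge-plugin config, NOT the literal 496-byte file minikube ships
// (that content does not appear in the log).
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil { // sudo mkdir -p equivalent
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```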
	I0924 12:16:36.725042    4520 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 12:16:36.725094    4520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 12:16:36.725108    4520 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-164000 minikube.k8s.io/updated_at=2024_09_24T12_16_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=stopped-upgrade-164000 minikube.k8s.io/primary=true
	I0924 12:16:36.766278    4520 ops.go:34] apiserver oom_adj: -16
	I0924 12:16:36.766295    4520 kubeadm.go:1113] duration metric: took 41.238708ms to wait for elevateKubeSystemPrivileges
	I0924 12:16:36.766433    4520 kubeadm.go:394] duration metric: took 4m11.84662725s to StartCluster
	I0924 12:16:36.766446    4520 settings.go:142] acquiring lock: {Name:mk8f5a1e4973fb47308ad8c9735bcc716ada1e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:16:36.766531    4520 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:16:36.766990    4520 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/kubeconfig: {Name:mk406b8f0f5e016c0aa63af8364801bb91be8bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:16:36.767206    4520 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:16:36.767214    4520 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 12:16:36.767254    4520 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-164000"
	I0924 12:16:36.767272    4520 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-164000"
	W0924 12:16:36.767276    4520 addons.go:243] addon storage-provisioner should already be in state true
	I0924 12:16:36.767285    4520 host.go:66] Checking if "stopped-upgrade-164000" exists ...
	I0924 12:16:36.767294    4520 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-164000"
	I0924 12:16:36.767300    4520 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-164000"
	I0924 12:16:36.767284    4520 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:16:36.768229    4520 kapi.go:59] client config for stopped-upgrade-164000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/client.key", CAFile:"/Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10666e030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
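The rest.Config dump above corresponds to a client-go client built from the profile's client certificate, key, and cluster CA. A minimal sketch of constructing an equivalent clientset follows; the host and file paths are copied from the dump, and the code requires k8s.io/client-go.

```go
// kapiclient.go — minimal sketch of building the kubernetes client whose
// rest.Config is dumped above. Paths and host are taken from the dump;
// requires k8s.io/client-go.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/stopped-upgrade-164000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19700-1081/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", clientset)
}
```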
	I0924 12:16:36.768350    4520 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-164000"
	W0924 12:16:36.768354    4520 addons.go:243] addon default-storageclass should already be in state true
	I0924 12:16:36.768362    4520 host.go:66] Checking if "stopped-upgrade-164000" exists ...
	I0924 12:16:36.771187    4520 out.go:177] * Verifying Kubernetes components...
	I0924 12:16:36.771494    4520 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 12:16:36.775324    4520 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 12:16:36.775331    4520 sshutil.go:53] new ssh client: &{IP:localhost Port:50495 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/id_rsa Username:docker}
	I0924 12:16:36.779132    4520 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 12:16:36.785163    4520 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 12:16:36.788192    4520 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 12:16:36.788198    4520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 12:16:36.788206    4520 sshutil.go:53] new ssh client: &{IP:localhost Port:50495 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/stopped-upgrade-164000/id_rsa Username:docker}
	I0924 12:16:36.874239    4520 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 12:16:36.878929    4520 api_server.go:52] waiting for apiserver process to appear ...
	I0924 12:16:36.878980    4520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 12:16:36.882822    4520 api_server.go:72] duration metric: took 115.606833ms to wait for apiserver process to appear ...
	I0924 12:16:36.882830    4520 api_server.go:88] waiting for apiserver healthz status ...
	I0924 12:16:36.882837    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:36.894928    4520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 12:16:36.951771    4520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 12:16:37.279623    4520 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0924 12:16:37.279635    4520 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
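Addon installation above is just the cluster's pinned kubectl applying the copied manifests with the in-VM kubeconfig. The sketch below mirrors the logged sudo KUBECONFIG=... invocations, again with local os/exec standing in for the SSH runner.

```go
// addonapply.go — sketch of the addon step above: apply the copied manifests
// with the pinned kubectl and in-VM kubeconfig. The command line mirrors the
// logged invocations; the real code runs it over SSH inside the VM.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, manifest := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		// sudo accepts leading VAR=value assignments, as in the logged command.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.24.1/kubectl",
			"apply", "-f", manifest)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("apply %s failed: %v\n%s", manifest, err, out)
		}
	}
}
```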
	I0924 12:16:41.884898    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:41.884940    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:46.885541    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:46.885571    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:51.885945    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:51.885984    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:16:56.886520    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:16:56.886582    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:01.887372    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:01.887420    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:06.888435    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:06.888483    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0924 12:17:07.281681    4520 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0924 12:17:07.285827    4520 out.go:177] * Enabled addons: storage-provisioner
	I0924 12:17:07.297801    4520 addons.go:510] duration metric: took 30.530796166s for enable addons: enabled=[storage-provisioner]
	I0924 12:17:11.890034    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:11.890096    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:16.891712    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:16.891758    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:21.894035    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:21.894084    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:26.894679    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:26.894697    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:31.896834    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:31.896865    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:36.899047    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:36.899241    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:17:36.934765    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:17:36.934862    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:17:36.947199    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:17:36.947290    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:17:36.959095    4520 logs.go:276] 2 containers: [f604fbbda06a 777f3883522c]
	I0924 12:17:36.959175    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:17:36.970193    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:17:36.970276    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:17:36.980927    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:17:36.981002    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:17:36.991126    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:17:36.991204    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:17:37.001713    4520 logs.go:276] 0 containers: []
	W0924 12:17:37.001725    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:17:37.001796    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:17:37.012882    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:17:37.012898    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:17:37.012903    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:17:37.024688    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:17:37.024698    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:17:37.029669    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:17:37.029676    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:17:37.044148    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:17:37.044163    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:17:37.056311    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:17:37.056325    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:17:37.071481    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:17:37.071494    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:17:37.083365    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:17:37.083376    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:17:37.104992    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:17:37.105003    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:17:37.121615    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:17:37.121626    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:17:37.146365    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:17:37.146373    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:17:37.181663    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:17:37.181671    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:17:37.218874    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:17:37.218890    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:17:37.234224    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:17:37.234240    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:17:39.748973    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:44.751396    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:44.751997    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:17:44.789098    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:17:44.789250    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:17:44.811853    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:17:44.811967    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:17:44.825992    4520 logs.go:276] 2 containers: [f604fbbda06a 777f3883522c]
	I0924 12:17:44.826080    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:17:44.839631    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:17:44.839709    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:17:44.850348    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:17:44.850436    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:17:44.864696    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:17:44.864779    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:17:44.875059    4520 logs.go:276] 0 containers: []
	W0924 12:17:44.875071    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:17:44.875141    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:17:44.885098    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:17:44.885114    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:17:44.885120    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:17:44.896822    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:17:44.896839    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:17:44.911413    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:17:44.911423    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:17:44.922535    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:17:44.922544    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:17:44.936533    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:17:44.936545    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:17:44.947917    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:17:44.947928    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:17:44.982106    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:17:44.982123    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:17:44.996860    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:17:44.996869    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:17:45.008939    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:17:45.008955    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:17:45.026209    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:17:45.026220    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:17:45.044397    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:17:45.044407    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:17:45.071073    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:17:45.071088    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:17:45.106350    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:17:45.106359    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:17:47.612781    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:17:52.615615    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:17:52.616106    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:17:52.657063    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:17:52.657225    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:17:52.679712    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:17:52.679841    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:17:52.695095    4520 logs.go:276] 2 containers: [f604fbbda06a 777f3883522c]
	I0924 12:17:52.695183    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:17:52.707773    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:17:52.707854    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:17:52.719072    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:17:52.719147    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:17:52.729810    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:17:52.729875    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:17:52.740406    4520 logs.go:276] 0 containers: []
	W0924 12:17:52.740418    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:17:52.740485    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:17:52.755586    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:17:52.755602    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:17:52.755607    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:17:52.759925    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:17:52.759934    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:17:52.774565    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:17:52.774575    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:17:52.786541    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:17:52.786554    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:17:52.798645    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:17:52.798656    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:17:52.810446    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:17:52.810458    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:17:52.834170    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:17:52.834180    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:17:52.867574    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:17:52.867583    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:17:52.881947    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:17:52.881959    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:17:52.896294    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:17:52.896304    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:17:52.911969    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:17:52.911978    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:17:52.935126    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:17:52.935134    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:17:52.946162    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:17:52.946171    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:17:55.483531    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:18:00.485952    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:18:00.486416    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:18:00.520305    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:18:00.520492    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:18:00.540742    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:18:00.540869    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:18:00.556732    4520 logs.go:276] 2 containers: [f604fbbda06a 777f3883522c]
	I0924 12:18:00.556822    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:18:00.570132    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:18:00.570207    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:18:00.580596    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:18:00.580682    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:18:00.594972    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:18:00.595042    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:18:00.607441    4520 logs.go:276] 0 containers: []
	W0924 12:18:00.607451    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:18:00.607508    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:18:00.618415    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:18:00.618432    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:18:00.618436    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:18:00.622762    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:18:00.622771    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:18:00.655653    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:18:00.655669    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:18:00.670580    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:18:00.670589    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:18:00.684513    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:18:00.684523    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:18:00.696306    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:18:00.696317    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:18:00.708904    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:18:00.708917    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:18:00.724360    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:18:00.724377    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:18:00.737904    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:18:00.737918    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:18:00.750953    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:18:00.750964    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:18:00.786260    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:18:00.786279    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:18:00.805423    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:18:00.805441    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:18:00.825480    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:18:00.825494    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:18:03.352607    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:18:08.354763    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:18:08.354844    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:18:08.366941    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:18:08.367022    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:18:08.378706    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:18:08.378784    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:18:08.390804    4520 logs.go:276] 2 containers: [f604fbbda06a 777f3883522c]
	I0924 12:18:08.390881    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:18:08.402144    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:18:08.402207    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:18:08.413186    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:18:08.413260    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:18:08.427338    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:18:08.427410    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:18:08.437454    4520 logs.go:276] 0 containers: []
	W0924 12:18:08.437465    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:18:08.437534    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:18:08.447708    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:18:08.447721    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:18:08.447726    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:18:08.460059    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:18:08.460071    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:18:08.473057    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:18:08.473066    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:18:08.508440    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:18:08.508448    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:18:08.520246    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:18:08.520259    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:18:08.531669    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:18:08.531682    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:18:08.542877    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:18:08.542890    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:18:08.557630    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:18:08.557639    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:18:08.575736    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:18:08.575745    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:18:08.598812    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:18:08.598819    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:18:08.602742    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:18:08.602749    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:18:08.636369    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:18:08.636381    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:18:08.654647    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:18:08.654660    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:18:11.170651    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:18:16.173452    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:18:16.174067    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:18:16.213158    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:18:16.213322    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:18:16.236557    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:18:16.236689    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:18:16.251855    4520 logs.go:276] 2 containers: [f604fbbda06a 777f3883522c]
	I0924 12:18:16.251948    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:18:16.263927    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:18:16.264013    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:18:16.274404    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:18:16.274479    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:18:16.288773    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:18:16.288844    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:18:16.298791    4520 logs.go:276] 0 containers: []
	W0924 12:18:16.298802    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:18:16.298864    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:18:16.311978    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:18:16.311994    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:18:16.312000    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:18:16.328793    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:18:16.328804    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:18:16.339833    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:18:16.339843    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:18:16.374045    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:18:16.374054    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:18:16.378428    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:18:16.378437    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:18:16.412571    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:18:16.412587    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:18:16.427224    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:18:16.427237    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:18:16.438360    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:18:16.438372    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:18:16.450572    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:18:16.450582    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:18:16.475143    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:18:16.475154    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:18:16.488363    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:18:16.488373    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:18:16.505372    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:18:16.505383    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:18:16.516812    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:18:16.516821    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:18:19.030639    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:18:24.033438    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:18:24.033998    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:18:24.073526    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:18:24.073697    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:18:24.094557    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:18:24.094698    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:18:24.110446    4520 logs.go:276] 2 containers: [f604fbbda06a 777f3883522c]
	I0924 12:18:24.110528    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:18:24.122353    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:18:24.122438    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:18:24.133678    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:18:24.133757    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:18:24.151726    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:18:24.151802    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:18:24.162230    4520 logs.go:276] 0 containers: []
	W0924 12:18:24.162243    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:18:24.162312    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:18:24.172570    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:18:24.172585    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:18:24.172591    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:18:24.183835    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:18:24.183845    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:18:24.204860    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:18:24.204870    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:18:24.240082    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:18:24.240092    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:18:24.254537    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:18:24.254546    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:18:24.269285    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:18:24.269295    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:18:24.280533    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:18:24.280544    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:18:24.291877    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:18:24.291889    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:18:24.303455    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:18:24.303468    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:18:24.327845    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:18:24.327852    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:18:24.340526    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:18:24.340537    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:18:24.345355    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:18:24.345365    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:18:24.383010    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:18:24.383026    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:18:26.901198    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:18:31.903550    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:18:31.904038    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:18:31.938602    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:18:31.938775    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:18:31.960613    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:18:31.960751    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:18:31.976846    4520 logs.go:276] 2 containers: [f604fbbda06a 777f3883522c]
	I0924 12:18:31.976943    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:18:31.988248    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:18:31.988338    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:18:31.998839    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:18:31.998925    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:18:32.009599    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:18:32.009681    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:18:32.020082    4520 logs.go:276] 0 containers: []
	W0924 12:18:32.020093    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:18:32.020159    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:18:32.030494    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:18:32.030510    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:18:32.030515    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:18:32.044679    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:18:32.044688    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:18:32.056193    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:18:32.056206    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:18:32.079804    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:18:32.079810    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:18:32.092062    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:18:32.092076    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:18:32.126469    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:18:32.126479    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:18:32.130647    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:18:32.130652    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:18:32.144333    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:18:32.144345    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:18:32.155718    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:18:32.155734    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:18:32.167239    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:18:32.167254    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:18:32.185162    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:18:32.185171    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:18:32.221419    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:18:32.221434    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:18:32.235762    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:18:32.235772    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:18:34.749684    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:18:39.751624    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:18:39.751866    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:18:39.775511    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:18:39.775604    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:18:39.786432    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:18:39.786507    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:18:39.797320    4520 logs.go:276] 2 containers: [f604fbbda06a 777f3883522c]
	I0924 12:18:39.797399    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:18:39.807845    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:18:39.807917    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:18:39.818342    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:18:39.818409    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:18:39.833163    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:18:39.833243    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:18:39.843684    4520 logs.go:276] 0 containers: []
	W0924 12:18:39.843694    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:18:39.843752    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:18:39.854450    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:18:39.854466    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:18:39.854472    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:18:39.866439    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:18:39.866451    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:18:39.883550    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:18:39.883559    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:18:39.895220    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:18:39.895234    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:18:39.907019    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:18:39.907029    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:18:39.939926    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:18:39.939934    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:18:39.943933    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:18:39.943941    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:18:39.977716    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:18:39.977725    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:18:39.994157    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:18:39.994167    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:18:40.017337    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:18:40.017346    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:18:40.031344    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:18:40.031352    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:18:40.043047    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:18:40.043056    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:18:40.057493    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:18:40.057502    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:18:42.571437    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:18:47.572557    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:18:47.573058    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:18:47.615104    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:18:47.615252    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:18:47.638050    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:18:47.638196    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:18:47.653168    4520 logs.go:276] 2 containers: [f604fbbda06a 777f3883522c]
	I0924 12:18:47.653261    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:18:47.665972    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:18:47.666037    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:18:47.676887    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:18:47.676957    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:18:47.687736    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:18:47.687806    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:18:47.697512    4520 logs.go:276] 0 containers: []
	W0924 12:18:47.697526    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:18:47.697585    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:18:47.710265    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:18:47.710286    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:18:47.710292    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:18:47.770773    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:18:47.770786    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:18:47.791909    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:18:47.791926    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:18:47.832337    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:18:47.832355    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:18:47.837500    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:18:47.837509    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:18:47.851327    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:18:47.851340    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:18:47.865578    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:18:47.865588    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:18:47.877473    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:18:47.877487    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:18:47.893982    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:18:47.894001    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:18:47.917916    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:18:47.917930    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:18:47.945494    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:18:47.945507    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:18:47.964314    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:18:47.964327    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:18:47.990684    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:18:47.990699    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
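	[Editor's note: each gathering pass above reduces to one `docker ps -a --filter=name=k8s_<component>` lookup per control-plane component, then `docker logs --tail 400` per container ID found. A self-contained Go sketch of that fan-out, under the assumption that plain `os/exec` shelling-out is a fair stand-in for minikube's ssh_runner; the helper names are hypothetical:]

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors the lookups in the log:
	// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println("lookup failed:", c, err)
				continue
			}
			if len(ids) == 0 {
				// Matches the warning in the log for kindnet.
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				// Matches: docker logs --tail 400 <id>
				// (output capture elided in this sketch)
				_ = exec.Command("docker", "logs", "--tail", "400", id).Run()
			}
		}
	}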
	I0924 12:18:50.508659    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:18:55.511427    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:18:55.511972    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:18:55.550892    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:18:55.551049    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:18:55.572372    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:18:55.572492    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:18:55.595208    4520 logs.go:276] 4 containers: [78b62aa20afa 80cc8a877d7f f604fbbda06a 777f3883522c]
	I0924 12:18:55.595300    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:18:55.607406    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:18:55.607486    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:18:55.618020    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:18:55.618097    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:18:55.629108    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:18:55.629177    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:18:55.639876    4520 logs.go:276] 0 containers: []
	W0924 12:18:55.639886    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:18:55.639942    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:18:55.650242    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:18:55.650258    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:18:55.650264    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:18:55.662037    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:18:55.662049    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:18:55.676456    4520 logs.go:123] Gathering logs for coredns [80cc8a877d7f] ...
	I0924 12:18:55.676466    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cc8a877d7f"
	I0924 12:18:55.688079    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:18:55.688090    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:18:55.699838    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:18:55.699847    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:18:55.715175    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:18:55.715186    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:18:55.734230    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:18:55.734241    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:18:55.738467    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:18:55.738473    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:18:55.754036    4520 logs.go:123] Gathering logs for coredns [78b62aa20afa] ...
	I0924 12:18:55.754051    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78b62aa20afa"
	I0924 12:18:55.765791    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:18:55.765801    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:18:55.777146    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:18:55.777156    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:18:55.800861    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:18:55.800868    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:18:55.812413    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:18:55.812429    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:18:55.847839    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:18:55.847847    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:18:55.883130    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:18:55.883143    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:18:58.406438    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:19:03.409281    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:19:03.409904    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:19:03.448949    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:19:03.449120    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:19:03.470989    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:19:03.471124    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:19:03.489262    4520 logs.go:276] 4 containers: [78b62aa20afa 80cc8a877d7f f604fbbda06a 777f3883522c]
	I0924 12:19:03.489360    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:19:03.501656    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:19:03.501745    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:19:03.513459    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:19:03.513534    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:19:03.530984    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:19:03.531080    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:19:03.544782    4520 logs.go:276] 0 containers: []
	W0924 12:19:03.544802    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:19:03.544913    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:19:03.556586    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:19:03.556604    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:19:03.556611    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:19:03.579290    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:19:03.579303    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:19:03.604943    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:19:03.604960    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:19:03.610418    4520 logs.go:123] Gathering logs for coredns [80cc8a877d7f] ...
	I0924 12:19:03.610432    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cc8a877d7f"
	I0924 12:19:03.623486    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:19:03.623498    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:19:03.636530    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:19:03.636543    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:19:03.649967    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:19:03.649979    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:19:03.689933    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:19:03.689946    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:19:03.702960    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:19:03.702973    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:19:03.721686    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:19:03.721702    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:19:03.757504    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:19:03.757524    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:19:03.774368    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:19:03.774382    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:19:03.788333    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:19:03.788345    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:19:03.803468    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:19:03.803482    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:19:03.820216    4520 logs.go:123] Gathering logs for coredns [78b62aa20afa] ...
	I0924 12:19:03.820228    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78b62aa20afa"
	I0924 12:19:06.335224    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:19:11.338021    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:19:11.338527    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:19:11.377774    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:19:11.377933    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:19:11.400327    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:19:11.400459    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:19:11.416482    4520 logs.go:276] 4 containers: [78b62aa20afa 80cc8a877d7f f604fbbda06a 777f3883522c]
	I0924 12:19:11.416570    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:19:11.429066    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:19:11.429145    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:19:11.439837    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:19:11.439925    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:19:11.450093    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:19:11.450178    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:19:11.460198    4520 logs.go:276] 0 containers: []
	W0924 12:19:11.460209    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:19:11.460277    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:19:11.470758    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:19:11.470778    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:19:11.470784    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:19:11.482152    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:19:11.482162    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:19:11.497355    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:19:11.497364    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:19:11.515922    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:19:11.515932    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:19:11.540736    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:19:11.540744    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:19:11.552833    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:19:11.552844    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:19:11.557701    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:19:11.557709    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:19:11.592367    4520 logs.go:123] Gathering logs for coredns [78b62aa20afa] ...
	I0924 12:19:11.592377    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78b62aa20afa"
	I0924 12:19:11.604952    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:19:11.604962    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:19:11.640380    4520 logs.go:123] Gathering logs for coredns [80cc8a877d7f] ...
	I0924 12:19:11.640390    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cc8a877d7f"
	I0924 12:19:11.652389    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:19:11.652399    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:19:11.664218    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:19:11.664228    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:19:11.675822    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:19:11.675832    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:19:11.691811    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:19:11.691820    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:19:11.706476    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:19:11.706487    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:19:14.220307    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:19:19.222778    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:19:19.223247    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:19:19.255137    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:19:19.255299    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:19:19.275699    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:19:19.275827    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:19:19.292082    4520 logs.go:276] 4 containers: [78b62aa20afa 80cc8a877d7f f604fbbda06a 777f3883522c]
	I0924 12:19:19.292172    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:19:19.303632    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:19:19.303707    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:19:19.313821    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:19:19.313892    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:19:19.328307    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:19:19.328396    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:19:19.338567    4520 logs.go:276] 0 containers: []
	W0924 12:19:19.338578    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:19:19.338654    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:19:19.348929    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:19:19.348948    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:19:19.348954    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:19:19.361063    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:19:19.361075    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:19:19.387363    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:19:19.387375    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:19:19.398981    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:19:19.398996    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:19:19.413120    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:19:19.413133    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:19:19.427755    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:19:19.427768    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:19:19.443231    4520 logs.go:123] Gathering logs for coredns [80cc8a877d7f] ...
	I0924 12:19:19.443241    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cc8a877d7f"
	I0924 12:19:19.455553    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:19:19.455563    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:19:19.459858    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:19:19.459867    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:19:19.493737    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:19:19.493753    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:19:19.511257    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:19:19.511266    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:19:19.543971    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:19:19.543981    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:19:19.561369    4520 logs.go:123] Gathering logs for coredns [78b62aa20afa] ...
	I0924 12:19:19.561379    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78b62aa20afa"
	I0924 12:19:19.575007    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:19:19.575021    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:19:19.586448    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:19:19.586459    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:19:22.100394    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:19:27.103102    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:19:27.103305    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:19:27.119733    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:19:27.119811    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:19:27.130002    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:19:27.130080    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:19:27.140461    4520 logs.go:276] 4 containers: [78b62aa20afa 80cc8a877d7f f604fbbda06a 777f3883522c]
	I0924 12:19:27.140546    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:19:27.150412    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:19:27.150484    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:19:27.160908    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:19:27.160985    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:19:27.171226    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:19:27.171291    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:19:27.181172    4520 logs.go:276] 0 containers: []
	W0924 12:19:27.181183    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:19:27.181247    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:19:27.191282    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:19:27.191300    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:19:27.191306    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:19:27.224685    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:19:27.224693    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:19:27.229219    4520 logs.go:123] Gathering logs for coredns [80cc8a877d7f] ...
	I0924 12:19:27.229230    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cc8a877d7f"
	I0924 12:19:27.241250    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:19:27.241260    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:19:27.256648    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:19:27.256658    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:19:27.268323    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:19:27.268332    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:19:27.304254    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:19:27.304268    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:19:27.319077    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:19:27.319086    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:19:27.333322    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:19:27.333332    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:19:27.347968    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:19:27.347977    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:19:27.363332    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:19:27.363347    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:19:27.380786    4520 logs.go:123] Gathering logs for coredns [78b62aa20afa] ...
	I0924 12:19:27.380795    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78b62aa20afa"
	I0924 12:19:27.392050    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:19:27.392064    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:19:27.403073    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:19:27.403082    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:19:27.419983    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:19:27.419992    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
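	[Editor's note: the overall cadence visible in this section is an outer retry loop: probe healthz for ~5s, gather logs on failure, pause briefly, repeat. A hypothetical sketch tying the two earlier fragments together; the stub bodies and the overall deadline are assumptions, only the cycle shape comes from the log:]

	package main

	import (
		"fmt"
		"time"
	)

	// Stubs standing in for the probe and gathering passes sketched
	// earlier in this section; both names are hypothetical.
	func checkHealthz() error { return fmt.Errorf("context deadline exceeded") }
	func gatherLogs()         { fmt.Println("gathering logs ...") }

	func main() {
		deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
		for time.Now().Before(deadline) {
			if err := checkHealthz(); err == nil {
				return // apiserver became healthy
			}
			gatherLogs()
			// The log shows ~8s between probes: ~5s probe timeout plus
			// the gathering pass; a short sleep stands in for any pause.
			time.Sleep(2500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}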
	I0924 12:19:29.945962    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:19:34.947837    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:19:34.948271    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:19:34.992198    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:19:34.992313    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:19:35.011417    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:19:35.011508    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:19:35.025039    4520 logs.go:276] 4 containers: [78b62aa20afa 80cc8a877d7f f604fbbda06a 777f3883522c]
	I0924 12:19:35.025131    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:19:35.037146    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:19:35.037226    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:19:35.049797    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:19:35.049879    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:19:35.060982    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:19:35.061063    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:19:35.073630    4520 logs.go:276] 0 containers: []
	W0924 12:19:35.073646    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:19:35.073719    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:19:35.084297    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:19:35.084318    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:19:35.084325    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:19:35.098562    4520 logs.go:123] Gathering logs for coredns [78b62aa20afa] ...
	I0924 12:19:35.098574    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78b62aa20afa"
	I0924 12:19:35.111132    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:19:35.111148    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:19:35.123122    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:19:35.123134    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:19:35.134892    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:19:35.134906    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:19:35.146733    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:19:35.146746    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:19:35.168578    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:19:35.168588    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:19:35.194209    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:19:35.194217    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:19:35.230455    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:19:35.230467    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:19:35.244728    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:19:35.244736    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:19:35.256754    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:19:35.256764    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:19:35.271399    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:19:35.271410    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:19:35.283186    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:19:35.283194    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:19:35.318182    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:19:35.318190    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:19:35.322149    4520 logs.go:123] Gathering logs for coredns [80cc8a877d7f] ...
	I0924 12:19:35.322155    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cc8a877d7f"
	I0924 12:19:37.836545    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:19:42.838860    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:19:42.839438    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:19:42.881256    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:19:42.881409    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:19:42.903117    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:19:42.903248    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:19:42.918832    4520 logs.go:276] 4 containers: [78b62aa20afa 80cc8a877d7f f604fbbda06a 777f3883522c]
	I0924 12:19:42.918926    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:19:42.931233    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:19:42.931313    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:19:42.941845    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:19:42.941918    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:19:42.952629    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:19:42.952706    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:19:42.963435    4520 logs.go:276] 0 containers: []
	W0924 12:19:42.963445    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:19:42.963509    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:19:42.973943    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:19:42.973962    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:19:42.973968    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:19:42.985964    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:19:42.985979    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:19:42.990652    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:19:42.990663    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:19:43.060975    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:19:43.060991    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:19:43.073417    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:19:43.073431    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:19:43.099049    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:19:43.099061    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:19:43.135747    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:19:43.135758    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:19:43.149846    4520 logs.go:123] Gathering logs for coredns [80cc8a877d7f] ...
	I0924 12:19:43.149855    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cc8a877d7f"
	I0924 12:19:43.162321    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:19:43.162331    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:19:43.178058    4520 logs.go:123] Gathering logs for coredns [78b62aa20afa] ...
	I0924 12:19:43.178069    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78b62aa20afa"
	I0924 12:19:43.190122    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:19:43.190138    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:19:43.202392    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:19:43.202406    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:19:43.219564    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:19:43.219577    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:19:43.233531    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:19:43.233544    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:19:43.261929    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:19:43.261941    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:19:45.775582    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:19:50.776567    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:19:50.777008    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:19:50.820132    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:19:50.820282    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:19:50.847302    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:19:50.847429    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:19:50.864497    4520 logs.go:276] 4 containers: [78b62aa20afa 80cc8a877d7f f604fbbda06a 777f3883522c]
	I0924 12:19:50.864596    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:19:50.876003    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:19:50.876085    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:19:50.888279    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:19:50.888364    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:19:50.900656    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:19:50.900731    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:19:50.911181    4520 logs.go:276] 0 containers: []
	W0924 12:19:50.911191    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:19:50.911254    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:19:50.921611    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:19:50.921627    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:19:50.921633    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:19:50.938881    4520 logs.go:123] Gathering logs for coredns [78b62aa20afa] ...
	I0924 12:19:50.938891    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78b62aa20afa"
	I0924 12:19:50.950560    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:19:50.950571    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:19:50.963211    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:19:50.963221    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:19:50.975081    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:19:50.975091    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:19:50.986898    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:19:50.986913    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:19:51.022295    4520 logs.go:123] Gathering logs for coredns [80cc8a877d7f] ...
	I0924 12:19:51.022303    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cc8a877d7f"
	I0924 12:19:51.033983    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:19:51.033997    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:19:51.058595    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:19:51.058602    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:19:51.070524    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:19:51.070534    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:19:51.085022    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:19:51.085032    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:19:51.096146    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:19:51.096156    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:19:51.101108    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:19:51.101117    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:19:51.136993    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:19:51.137002    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:19:51.151653    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:19:51.151664    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:19:53.667612    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:19:58.669866    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:19:58.670457    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:19:58.711720    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:19:58.711879    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:19:58.736007    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:19:58.736145    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:19:58.751114    4520 logs.go:276] 4 containers: [78b62aa20afa 80cc8a877d7f f604fbbda06a 777f3883522c]
	I0924 12:19:58.751204    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:19:58.763185    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:19:58.763261    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:19:58.773738    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:19:58.773823    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:19:58.786669    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:19:58.786740    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:19:58.797360    4520 logs.go:276] 0 containers: []
	W0924 12:19:58.797373    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:19:58.797431    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:19:58.808133    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:19:58.808156    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:19:58.808162    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:19:58.812556    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:19:58.812564    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:19:58.824861    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:19:58.824871    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:19:58.837646    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:19:58.837655    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:19:58.849153    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:19:58.849162    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:19:58.872596    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:19:58.872605    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:19:58.884251    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:19:58.884266    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:19:58.920152    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:19:58.920168    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:19:58.938634    4520 logs.go:123] Gathering logs for coredns [80cc8a877d7f] ...
	I0924 12:19:58.938644    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cc8a877d7f"
	I0924 12:19:58.950644    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:19:58.950660    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:19:58.974842    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:19:58.974851    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:19:58.987148    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:19:58.987158    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:19:59.020765    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:19:59.020773    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:19:59.035525    4520 logs.go:123] Gathering logs for coredns [78b62aa20afa] ...
	I0924 12:19:59.035541    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78b62aa20afa"
	I0924 12:19:59.046927    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:19:59.046936    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:20:01.564356    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:20:06.566710    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:20:06.566890    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:20:06.580266    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:20:06.580368    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:20:06.592560    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:20:06.592710    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:20:06.610528    4520 logs.go:276] 4 containers: [78b62aa20afa 80cc8a877d7f f604fbbda06a 777f3883522c]
	I0924 12:20:06.610617    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:20:06.621066    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:20:06.621145    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:20:06.631200    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:20:06.631281    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:20:06.640999    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:20:06.641083    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:20:06.651234    4520 logs.go:276] 0 containers: []
	W0924 12:20:06.651248    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:20:06.651315    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:20:06.661376    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:20:06.661401    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:20:06.661408    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:20:06.672462    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:20:06.672471    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:20:06.683888    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:20:06.683896    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:20:06.718068    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:20:06.718076    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:20:06.733093    4520 logs.go:123] Gathering logs for coredns [80cc8a877d7f] ...
	I0924 12:20:06.733102    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cc8a877d7f"
	I0924 12:20:06.744600    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:20:06.744608    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:20:06.756120    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:20:06.756129    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:20:06.771642    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:20:06.771651    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:20:06.785472    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:20:06.785481    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:20:06.808556    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:20:06.808565    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:20:06.826562    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:20:06.826573    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:20:06.830681    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:20:06.830688    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:20:06.866374    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:20:06.866385    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:20:06.880693    4520 logs.go:123] Gathering logs for coredns [78b62aa20afa] ...
	I0924 12:20:06.880704    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78b62aa20afa"
	I0924 12:20:06.895440    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:20:06.895454    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:20:09.409169    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:20:14.410666    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:20:14.410793    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:20:14.423055    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:20:14.423169    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:20:14.433922    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:20:14.434007    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:20:14.444636    4520 logs.go:276] 4 containers: [78b62aa20afa 80cc8a877d7f f604fbbda06a 777f3883522c]
	I0924 12:20:14.444721    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:20:14.456113    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:20:14.456201    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:20:14.466140    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:20:14.466218    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:20:14.476671    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:20:14.476750    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:20:14.486887    4520 logs.go:276] 0 containers: []
	W0924 12:20:14.486897    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:20:14.486978    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:20:14.497237    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:20:14.497256    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:20:14.497261    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:20:14.530762    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:20:14.530769    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:20:14.567207    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:20:14.567216    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:20:14.578620    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:20:14.578631    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:20:14.596304    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:20:14.596318    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:20:14.621271    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:20:14.621282    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:20:14.632424    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:20:14.632436    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:20:14.644490    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:20:14.644503    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:20:14.658083    4520 logs.go:123] Gathering logs for coredns [80cc8a877d7f] ...
	I0924 12:20:14.658097    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cc8a877d7f"
	I0924 12:20:14.669493    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:20:14.669505    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:20:14.683593    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:20:14.683603    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:20:14.695390    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:20:14.695401    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:20:14.699541    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:20:14.699551    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:20:14.713997    4520 logs.go:123] Gathering logs for coredns [78b62aa20afa] ...
	I0924 12:20:14.714011    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78b62aa20afa"
	I0924 12:20:14.725667    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:20:14.725678    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:20:17.238939    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:20:22.241320    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:20:22.241874    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:20:22.280949    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:20:22.281108    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:20:22.302432    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:20:22.302546    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:20:22.318423    4520 logs.go:276] 4 containers: [78b62aa20afa 80cc8a877d7f f604fbbda06a 777f3883522c]
	I0924 12:20:22.318513    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:20:22.330878    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:20:22.330963    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:20:22.341520    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:20:22.341594    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:20:22.352949    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:20:22.353033    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:20:22.363366    4520 logs.go:276] 0 containers: []
	W0924 12:20:22.363376    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:20:22.363438    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:20:22.373744    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:20:22.373764    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:20:22.373770    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:20:22.388875    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:20:22.388886    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:20:22.400502    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:20:22.400512    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:20:22.424355    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:20:22.424366    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:20:22.435872    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:20:22.435884    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:20:22.450129    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:20:22.450143    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:20:22.484653    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:20:22.484667    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:20:22.498901    4520 logs.go:123] Gathering logs for coredns [78b62aa20afa] ...
	I0924 12:20:22.498914    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78b62aa20afa"
	I0924 12:20:22.510728    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:20:22.510737    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:20:22.523476    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:20:22.523487    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:20:22.537776    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:20:22.537789    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:20:22.549523    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:20:22.549535    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:20:22.584627    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:20:22.584637    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:20:22.589146    4520 logs.go:123] Gathering logs for coredns [80cc8a877d7f] ...
	I0924 12:20:22.589152    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cc8a877d7f"
	I0924 12:20:22.600679    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:20:22.600692    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:20:25.122963    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:20:30.125531    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:20:30.125608    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0924 12:20:30.138217    4520 logs.go:276] 1 containers: [6b3aa4b926f4]
	I0924 12:20:30.138281    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0924 12:20:30.149290    4520 logs.go:276] 1 containers: [9b8154af5127]
	I0924 12:20:30.149357    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0924 12:20:30.160802    4520 logs.go:276] 4 containers: [78b62aa20afa 80cc8a877d7f f604fbbda06a 777f3883522c]
	I0924 12:20:30.160882    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0924 12:20:30.176618    4520 logs.go:276] 1 containers: [032939cb4c30]
	I0924 12:20:30.176694    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0924 12:20:30.188983    4520 logs.go:276] 1 containers: [f84ea9b6522c]
	I0924 12:20:30.189040    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0924 12:20:30.200912    4520 logs.go:276] 1 containers: [4ddc6c70aca0]
	I0924 12:20:30.200992    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0924 12:20:30.212261    4520 logs.go:276] 0 containers: []
	W0924 12:20:30.212273    4520 logs.go:278] No container was found matching "kindnet"
	I0924 12:20:30.212347    4520 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0924 12:20:30.225183    4520 logs.go:276] 1 containers: [0cd382ca5523]
	I0924 12:20:30.225199    4520 logs.go:123] Gathering logs for etcd [9b8154af5127] ...
	I0924 12:20:30.225205    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b8154af5127"
	I0924 12:20:30.240232    4520 logs.go:123] Gathering logs for coredns [f604fbbda06a] ...
	I0924 12:20:30.240243    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f604fbbda06a"
	I0924 12:20:30.252776    4520 logs.go:123] Gathering logs for kube-controller-manager [4ddc6c70aca0] ...
	I0924 12:20:30.252787    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ddc6c70aca0"
	I0924 12:20:30.271874    4520 logs.go:123] Gathering logs for container status ...
	I0924 12:20:30.271889    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 12:20:30.283966    4520 logs.go:123] Gathering logs for kube-apiserver [6b3aa4b926f4] ...
	I0924 12:20:30.283976    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3aa4b926f4"
	I0924 12:20:30.298897    4520 logs.go:123] Gathering logs for coredns [78b62aa20afa] ...
	I0924 12:20:30.298908    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78b62aa20afa"
	I0924 12:20:30.315278    4520 logs.go:123] Gathering logs for coredns [80cc8a877d7f] ...
	I0924 12:20:30.315288    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80cc8a877d7f"
	I0924 12:20:30.331621    4520 logs.go:123] Gathering logs for coredns [777f3883522c] ...
	I0924 12:20:30.331631    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 777f3883522c"
	I0924 12:20:30.344594    4520 logs.go:123] Gathering logs for kube-scheduler [032939cb4c30] ...
	I0924 12:20:30.344606    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032939cb4c30"
	I0924 12:20:30.360730    4520 logs.go:123] Gathering logs for storage-provisioner [0cd382ca5523] ...
	I0924 12:20:30.360746    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0cd382ca5523"
	I0924 12:20:30.372969    4520 logs.go:123] Gathering logs for dmesg ...
	I0924 12:20:30.372979    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 12:20:30.377149    4520 logs.go:123] Gathering logs for kube-proxy [f84ea9b6522c] ...
	I0924 12:20:30.377156    4520 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f84ea9b6522c"
	I0924 12:20:30.389447    4520 logs.go:123] Gathering logs for Docker ...
	I0924 12:20:30.389458    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0924 12:20:30.416156    4520 logs.go:123] Gathering logs for kubelet ...
	I0924 12:20:30.416166    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0924 12:20:30.451918    4520 logs.go:123] Gathering logs for describe nodes ...
	I0924 12:20:30.451932    4520 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 12:20:32.993394    4520 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0924 12:20:37.995761    4520 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0924 12:20:38.001045    4520 out.go:201] 
	W0924 12:20:38.004264    4520 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0924 12:20:38.004272    4520 out.go:270] * 
	W0924 12:20:38.004725    4520 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:20:38.024199    4520 out.go:201] 

** /stderr **
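The "Gathering logs" loop in the stderr above follows a fixed pattern: one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" lookup per control-plane component to find its container, then "docker logs --tail 400 <id>" for each match. Below is a minimal standalone sketch of those two steps in Go; the component list and the --tail 400 figure are taken from the log, while the real harness runs the same commands over SSH through ssh_runner rather than against the local Docker daemon:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Component names mirror the k8s_<name> filters in the log above;
		// "kindnet" is included to show the zero-match case the harness warns about.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		}
		for _, name := range components {
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Printf("%s: lookup failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			// Tail each matching container, as in "docker logs --tail 400 <id>".
			for _, id := range ids {
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("== %s [%s] ==\n%s\n", name, id, logs)
			}
		}
	}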
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-164000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.40s)
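The exit above is the harness giving up after polling the guest apiserver's /healthz endpoint for the full 6m0s node-wait budget; every probe logged at api_server.go:253 times out without an answer. A minimal sketch of one such probe, assuming the guest address from the log (10.0.2.15:8443) and skipping TLS verification the way any bootstrap check against a not-yet-trusted apiserver certificate must:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // each probe in the log above fails after ~5s
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// On this run: "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
			fmt.Println("probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with body "ok".
		fmt.Println(resp.Status, string(body))
	}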

TestPause/serial/Start (9.93s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-767000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-767000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.894518709s)

-- stdout --
	* [pause-767000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-767000" primary control-plane node in "pause-767000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-767000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-767000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-767000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-767000 -n pause-767000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-767000 -n pause-767000: exit status 7 (30.447584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-767000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.93s)
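Unlike the upgrade failure above, this run never reaches provisioning: both qemu2 create attempts die on ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning nothing was listening on the socket_vmnet unix socket the driver is configured to use. The connect step can be reproduced in isolation with the short Go sketch below (socket path taken from the log); the remaining qemu2-driver failures in this report show the identical signature:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket the qemu2 driver hands to socket_vmnet_client.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the failures in this run,
			// i.e. the socket_vmnet daemon is not running.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}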

TestNoKubernetes/serial/StartWithK8s (9.85s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-339000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-339000 --driver=qemu2 : exit status 80 (9.793877292s)

-- stdout --
	* [NoKubernetes-339000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-339000" primary control-plane node in "NoKubernetes-339000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-339000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-339000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-339000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000: exit status 7 (53.353666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-339000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.85s)

TestNoKubernetes/serial/StartWithStopK8s (5.29s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --driver=qemu2 : exit status 80 (5.240759917s)

-- stdout --
	* [NoKubernetes-339000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-339000
	* Restarting existing qemu2 VM for "NoKubernetes-339000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-339000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-339000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000: exit status 7 (45.135875ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-339000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.29s)

TestNoKubernetes/serial/Start (5.33s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --driver=qemu2 : exit status 80 (5.264913958s)

-- stdout --
	* [NoKubernetes-339000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-339000
	* Restarting existing qemu2 VM for "NoKubernetes-339000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-339000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-339000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000: exit status 7 (64.403125ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-339000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.33s)

TestNoKubernetes/serial/StartNoArgs (5.3s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-339000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-339000 --driver=qemu2 : exit status 80 (5.237632875s)

-- stdout --
	* [NoKubernetes-339000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-339000
	* Restarting existing qemu2 VM for "NoKubernetes-339000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-339000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-339000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-339000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-339000 -n NoKubernetes-339000: exit status 7 (66.490958ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-339000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)

TestNetworkPlugins/group/auto/Start (9.91s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.906787042s)

-- stdout --
	* [auto-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-138000" primary control-plane node in "auto-138000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-138000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:18:53.631357    4728 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:18:53.631502    4728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:18:53.631505    4728 out.go:358] Setting ErrFile to fd 2...
	I0924 12:18:53.631507    4728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:18:53.631646    4728 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:18:53.632704    4728 out.go:352] Setting JSON to false
	I0924 12:18:53.648978    4728 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4704,"bootTime":1727200829,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:18:53.649047    4728 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:18:53.654419    4728 out.go:177] * [auto-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:18:53.662328    4728 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:18:53.662363    4728 notify.go:220] Checking for updates...
	I0924 12:18:53.669340    4728 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:18:53.672354    4728 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:18:53.675269    4728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:18:53.678303    4728 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:18:53.681256    4728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:18:53.684702    4728 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:18:53.684765    4728 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:18:53.684820    4728 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:18:53.689316    4728 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:18:53.696357    4728 start.go:297] selected driver: qemu2
	I0924 12:18:53.696363    4728 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:18:53.696370    4728 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:18:53.698572    4728 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:18:53.701267    4728 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:18:53.702373    4728 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:18:53.702389    4728 cni.go:84] Creating CNI manager for ""
	I0924 12:18:53.702409    4728 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:18:53.702417    4728 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 12:18:53.702449    4728 start.go:340] cluster config:
	{Name:auto-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:18:53.705992    4728 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:18:53.713333    4728 out.go:177] * Starting "auto-138000" primary control-plane node in "auto-138000" cluster
	I0924 12:18:53.717264    4728 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:18:53.717282    4728 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:18:53.717289    4728 cache.go:56] Caching tarball of preloaded images
	I0924 12:18:53.717350    4728 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:18:53.717359    4728 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:18:53.717416    4728 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/auto-138000/config.json ...
	I0924 12:18:53.717427    4728 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/auto-138000/config.json: {Name:mk91da82385d64e17071595a0754e677cec288b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:18:53.717680    4728 start.go:360] acquireMachinesLock for auto-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:18:53.717710    4728 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "auto-138000"
	I0924 12:18:53.717722    4728 start.go:93] Provisioning new machine with config: &{Name:auto-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:18:53.717752    4728 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:18:53.723361    4728 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:18:53.738466    4728 start.go:159] libmachine.API.Create for "auto-138000" (driver="qemu2")
	I0924 12:18:53.738490    4728 client.go:168] LocalClient.Create starting
	I0924 12:18:53.738554    4728 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:18:53.738586    4728 main.go:141] libmachine: Decoding PEM data...
	I0924 12:18:53.738593    4728 main.go:141] libmachine: Parsing certificate...
	I0924 12:18:53.738629    4728 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:18:53.738651    4728 main.go:141] libmachine: Decoding PEM data...
	I0924 12:18:53.738658    4728 main.go:141] libmachine: Parsing certificate...
	I0924 12:18:53.739093    4728 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:18:53.904484    4728 main.go:141] libmachine: Creating SSH key...
	I0924 12:18:54.084748    4728 main.go:141] libmachine: Creating Disk image...
	I0924 12:18:54.084759    4728 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:18:54.085035    4728 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/disk.qcow2
	I0924 12:18:54.094757    4728 main.go:141] libmachine: STDOUT: 
	I0924 12:18:54.094784    4728 main.go:141] libmachine: STDERR: 
	I0924 12:18:54.094848    4728 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/disk.qcow2 +20000M
	I0924 12:18:54.103044    4728 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:18:54.103061    4728 main.go:141] libmachine: STDERR: 
	I0924 12:18:54.103077    4728 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/disk.qcow2
	I0924 12:18:54.103080    4728 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:18:54.103093    4728 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:18:54.103116    4728 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:98:bd:5c:5f:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/disk.qcow2
	I0924 12:18:54.104745    4728 main.go:141] libmachine: STDOUT: 
	I0924 12:18:54.104768    4728 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:18:54.104788    4728 client.go:171] duration metric: took 366.294375ms to LocalClient.Create
	I0924 12:18:56.105716    4728 start.go:128] duration metric: took 2.387968416s to createHost
	I0924 12:18:56.105745    4728 start.go:83] releasing machines lock for "auto-138000", held for 2.388046083s
	W0924 12:18:56.105795    4728 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:18:56.110838    4728 out.go:177] * Deleting "auto-138000" in qemu2 ...
	W0924 12:18:56.137270    4728 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:18:56.137282    4728 start.go:729] Will try again in 5 seconds ...
	I0924 12:19:01.139501    4728 start.go:360] acquireMachinesLock for auto-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:19:01.140099    4728 start.go:364] duration metric: took 497.875µs to acquireMachinesLock for "auto-138000"
	I0924 12:19:01.140251    4728 start.go:93] Provisioning new machine with config: &{Name:auto-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:19:01.140446    4728 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:19:01.151249    4728 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:19:01.202624    4728 start.go:159] libmachine.API.Create for "auto-138000" (driver="qemu2")
	I0924 12:19:01.202682    4728 client.go:168] LocalClient.Create starting
	I0924 12:19:01.202827    4728 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:19:01.202906    4728 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:01.202923    4728 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:01.202997    4728 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:19:01.203044    4728 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:01.203061    4728 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:01.203584    4728 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:19:01.375672    4728 main.go:141] libmachine: Creating SSH key...
	I0924 12:19:01.455788    4728 main.go:141] libmachine: Creating Disk image...
	I0924 12:19:01.455798    4728 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:19:01.456043    4728 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/disk.qcow2
	I0924 12:19:01.465563    4728 main.go:141] libmachine: STDOUT: 
	I0924 12:19:01.465587    4728 main.go:141] libmachine: STDERR: 
	I0924 12:19:01.465652    4728 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/disk.qcow2 +20000M
	I0924 12:19:01.473586    4728 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:19:01.473601    4728 main.go:141] libmachine: STDERR: 
	I0924 12:19:01.473613    4728 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/disk.qcow2
	I0924 12:19:01.473618    4728 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:19:01.473628    4728 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:19:01.473662    4728 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:00:a1:13:8d:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/auto-138000/disk.qcow2
	I0924 12:19:01.475317    4728 main.go:141] libmachine: STDOUT: 
	I0924 12:19:01.475332    4728 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:19:01.475345    4728 client.go:171] duration metric: took 272.657166ms to LocalClient.Create
	I0924 12:19:03.477403    4728 start.go:128] duration metric: took 2.336949916s to createHost
	I0924 12:19:03.477420    4728 start.go:83] releasing machines lock for "auto-138000", held for 2.337318875s
	W0924 12:19:03.477552    4728 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:19:03.486768    4728 out.go:201] 
	W0924 12:19:03.490842    4728 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:19:03.490849    4728 out.go:270] * 
	* 
	W0924 12:19:03.491440    4728 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:19:03.501783    4728 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.91s)
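Every attempt in this group dies before the guest boots: socket_vmnet_client exits because nothing is listening on /var/run/socket_vmnet, so QEMU never receives the network file descriptor it expects on fd 3. A minimal Go sketch of the failing pre-condition follows; this is not minikube code, only a probe of the same Unix socket named by SocketVMnetPath in the config dumps above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SocketVMnetPath from the cluster config logged above; a hypothetical
		// probe, not part of the minikube qemu2 driver.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// On the failing host this prints "... connection refused", the
			// same condition socket_vmnet_client reports in the STDERR above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On a healthy host (with the socket_vmnet daemon actually running) the dial succeeds, and host creation in this group can proceed past the QEMU launch.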

TestNetworkPlugins/group/kindnet/Start (9.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.786451792s)

-- stdout --
	* [kindnet-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-138000" primary control-plane node in "kindnet-138000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-138000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:19:05.720622    4838 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:19:05.720751    4838 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:19:05.720754    4838 out.go:358] Setting ErrFile to fd 2...
	I0924 12:19:05.720757    4838 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:19:05.720878    4838 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:19:05.721907    4838 out.go:352] Setting JSON to false
	I0924 12:19:05.738398    4838 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4716,"bootTime":1727200829,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:19:05.738471    4838 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:19:05.742973    4838 out.go:177] * [kindnet-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:19:05.747052    4838 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:19:05.747140    4838 notify.go:220] Checking for updates...
	I0924 12:19:05.753994    4838 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:19:05.757000    4838 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:19:05.759904    4838 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:19:05.762974    4838 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:19:05.766011    4838 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:19:05.769309    4838 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:19:05.769369    4838 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:19:05.769417    4838 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:19:05.773968    4838 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:19:05.780960    4838 start.go:297] selected driver: qemu2
	I0924 12:19:05.780971    4838 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:19:05.780979    4838 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:19:05.783321    4838 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:19:05.785964    4838 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:19:05.789081    4838 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:19:05.789104    4838 cni.go:84] Creating CNI manager for "kindnet"
	I0924 12:19:05.789112    4838 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0924 12:19:05.789146    4838 start.go:340] cluster config:
	{Name:kindnet-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:19:05.792677    4838 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:19:05.800006    4838 out.go:177] * Starting "kindnet-138000" primary control-plane node in "kindnet-138000" cluster
	I0924 12:19:05.803931    4838 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:19:05.803945    4838 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:19:05.803956    4838 cache.go:56] Caching tarball of preloaded images
	I0924 12:19:05.804023    4838 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:19:05.804028    4838 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:19:05.804083    4838 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/kindnet-138000/config.json ...
	I0924 12:19:05.804095    4838 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/kindnet-138000/config.json: {Name:mk0f2e2c6841b1ccfa1bdaa2fb2c7647e9f00654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:19:05.804305    4838 start.go:360] acquireMachinesLock for kindnet-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:19:05.804339    4838 start.go:364] duration metric: took 28.584µs to acquireMachinesLock for "kindnet-138000"
	I0924 12:19:05.804352    4838 start.go:93] Provisioning new machine with config: &{Name:kindnet-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:19:05.804384    4838 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:19:05.808930    4838 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:19:05.826219    4838 start.go:159] libmachine.API.Create for "kindnet-138000" (driver="qemu2")
	I0924 12:19:05.826249    4838 client.go:168] LocalClient.Create starting
	I0924 12:19:05.826313    4838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:19:05.826346    4838 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:05.826357    4838 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:05.826394    4838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:19:05.826419    4838 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:05.826430    4838 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:05.826822    4838 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:19:05.990253    4838 main.go:141] libmachine: Creating SSH key...
	I0924 12:19:06.041905    4838 main.go:141] libmachine: Creating Disk image...
	I0924 12:19:06.041911    4838 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:19:06.042137    4838 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/disk.qcow2
	I0924 12:19:06.051470    4838 main.go:141] libmachine: STDOUT: 
	I0924 12:19:06.051491    4838 main.go:141] libmachine: STDERR: 
	I0924 12:19:06.051553    4838 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/disk.qcow2 +20000M
	I0924 12:19:06.059352    4838 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:19:06.059367    4838 main.go:141] libmachine: STDERR: 
	I0924 12:19:06.059387    4838 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/disk.qcow2
	I0924 12:19:06.059394    4838 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:19:06.059409    4838 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:19:06.059433    4838 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:ee:62:e9:16:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/disk.qcow2
	I0924 12:19:06.060941    4838 main.go:141] libmachine: STDOUT: 
	I0924 12:19:06.060954    4838 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:19:06.060975    4838 client.go:171] duration metric: took 234.72075ms to LocalClient.Create
	I0924 12:19:08.063171    4838 start.go:128] duration metric: took 2.258769583s to createHost
	I0924 12:19:08.063293    4838 start.go:83] releasing machines lock for "kindnet-138000", held for 2.258947083s
	W0924 12:19:08.063395    4838 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:19:08.079752    4838 out.go:177] * Deleting "kindnet-138000" in qemu2 ...
	W0924 12:19:08.109943    4838 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:19:08.109981    4838 start.go:729] Will try again in 5 seconds ...
	I0924 12:19:13.112205    4838 start.go:360] acquireMachinesLock for kindnet-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:19:13.112738    4838 start.go:364] duration metric: took 434.292µs to acquireMachinesLock for "kindnet-138000"
	I0924 12:19:13.112808    4838 start.go:93] Provisioning new machine with config: &{Name:kindnet-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:19:13.113052    4838 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:19:13.122573    4838 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:19:13.167553    4838 start.go:159] libmachine.API.Create for "kindnet-138000" (driver="qemu2")
	I0924 12:19:13.167617    4838 client.go:168] LocalClient.Create starting
	I0924 12:19:13.167736    4838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:19:13.167804    4838 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:13.167825    4838 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:13.167903    4838 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:19:13.167950    4838 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:13.167967    4838 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:13.168508    4838 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:19:13.340196    4838 main.go:141] libmachine: Creating SSH key...
	I0924 12:19:13.397346    4838 main.go:141] libmachine: Creating Disk image...
	I0924 12:19:13.397352    4838 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:19:13.397558    4838 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/disk.qcow2
	I0924 12:19:13.406951    4838 main.go:141] libmachine: STDOUT: 
	I0924 12:19:13.406976    4838 main.go:141] libmachine: STDERR: 
	I0924 12:19:13.407046    4838 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/disk.qcow2 +20000M
	I0924 12:19:13.414974    4838 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:19:13.414996    4838 main.go:141] libmachine: STDERR: 
	I0924 12:19:13.415011    4838 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/disk.qcow2
	I0924 12:19:13.415016    4838 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:19:13.415035    4838 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:19:13.415061    4838 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:fe:3e:33:9d:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kindnet-138000/disk.qcow2
	I0924 12:19:13.416710    4838 main.go:141] libmachine: STDOUT: 
	I0924 12:19:13.416725    4838 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:19:13.416739    4838 client.go:171] duration metric: took 249.11675ms to LocalClient.Create
	I0924 12:19:15.418933    4838 start.go:128] duration metric: took 2.305860292s to createHost
	I0924 12:19:15.419045    4838 start.go:83] releasing machines lock for "kindnet-138000", held for 2.306298083s
	W0924 12:19:15.419497    4838 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:19:15.436193    4838 out.go:201] 
	W0924 12:19:15.440262    4838 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:19:15.440287    4838 out.go:270] * 
	* 
	W0924 12:19:15.443251    4838 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:19:15.451991    4838 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.79s)
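As with the auto profile, the qemu2 driver reacts to the refused socket by deleting the half-created machine, waiting five seconds, and retrying exactly once before exiting with GUEST_PROVISION. A hedged sketch of that control flow; createHost and deleteHost here are hypothetical stand-ins rather than the real libmachine API, and only the retry shape mirrors the start.go log lines above:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// Hypothetical stand-ins for the driver calls seen in the log.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func deleteHost() { /* corresponds to: * Deleting "kindnet-138000" in qemu2 ... */ }

	func main() {
		err := createHost()
		if err == nil {
			return // first attempt succeeded; nothing to retry
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		deleteHost()
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}

Because the daemon is still down five seconds later, the second attempt fails identically and the test records exit status 80.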

TestNetworkPlugins/group/flannel/Start (9.82s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.814924s)

-- stdout --
	* [flannel-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-138000" primary control-plane node in "flannel-138000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-138000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:19:17.764935    4951 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:19:17.765076    4951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:19:17.765083    4951 out.go:358] Setting ErrFile to fd 2...
	I0924 12:19:17.765086    4951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:19:17.765234    4951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:19:17.766438    4951 out.go:352] Setting JSON to false
	I0924 12:19:17.782851    4951 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4728,"bootTime":1727200829,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:19:17.782946    4951 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:19:17.789967    4951 out.go:177] * [flannel-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:19:17.797950    4951 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:19:17.797980    4951 notify.go:220] Checking for updates...
	I0924 12:19:17.804981    4951 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:19:17.807943    4951 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:19:17.810966    4951 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:19:17.813907    4951 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:19:17.816916    4951 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:19:17.820278    4951 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:19:17.820340    4951 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:19:17.820390    4951 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:19:17.824935    4951 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:19:17.831978    4951 start.go:297] selected driver: qemu2
	I0924 12:19:17.831985    4951 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:19:17.831993    4951 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:19:17.834514    4951 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:19:17.837993    4951 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:19:17.841028    4951 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:19:17.841047    4951 cni.go:84] Creating CNI manager for "flannel"
	I0924 12:19:17.841051    4951 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0924 12:19:17.841075    4951 start.go:340] cluster config:
	{Name:flannel-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:19:17.844732    4951 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:19:17.851951    4951 out.go:177] * Starting "flannel-138000" primary control-plane node in "flannel-138000" cluster
	I0924 12:19:17.855927    4951 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:19:17.855941    4951 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:19:17.855948    4951 cache.go:56] Caching tarball of preloaded images
	I0924 12:19:17.856008    4951 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:19:17.856013    4951 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:19:17.856069    4951 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/flannel-138000/config.json ...
	I0924 12:19:17.856081    4951 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/flannel-138000/config.json: {Name:mk52912e042884344171fcf4e6466f734db389f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:19:17.856313    4951 start.go:360] acquireMachinesLock for flannel-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:19:17.856347    4951 start.go:364] duration metric: took 27.875µs to acquireMachinesLock for "flannel-138000"
	I0924 12:19:17.856363    4951 start.go:93] Provisioning new machine with config: &{Name:flannel-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:19:17.856393    4951 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:19:17.864937    4951 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:19:17.881075    4951 start.go:159] libmachine.API.Create for "flannel-138000" (driver="qemu2")
	I0924 12:19:17.881108    4951 client.go:168] LocalClient.Create starting
	I0924 12:19:17.881175    4951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:19:17.881207    4951 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:17.881216    4951 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:17.881255    4951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:19:17.881277    4951 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:17.881285    4951 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:17.881616    4951 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:19:18.044644    4951 main.go:141] libmachine: Creating SSH key...
	I0924 12:19:18.153139    4951 main.go:141] libmachine: Creating Disk image...
	I0924 12:19:18.153146    4951 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:19:18.153383    4951 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/disk.qcow2
	I0924 12:19:18.162982    4951 main.go:141] libmachine: STDOUT: 
	I0924 12:19:18.162995    4951 main.go:141] libmachine: STDERR: 
	I0924 12:19:18.163064    4951 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/disk.qcow2 +20000M
	I0924 12:19:18.171000    4951 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:19:18.171013    4951 main.go:141] libmachine: STDERR: 
	I0924 12:19:18.171028    4951 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/disk.qcow2
	I0924 12:19:18.171033    4951 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:19:18.171046    4951 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:19:18.171074    4951 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:df:de:25:76:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/disk.qcow2
	I0924 12:19:18.172692    4951 main.go:141] libmachine: STDOUT: 
	I0924 12:19:18.172704    4951 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:19:18.172727    4951 client.go:171] duration metric: took 291.612791ms to LocalClient.Create
	I0924 12:19:20.174890    4951 start.go:128] duration metric: took 2.318481792s to createHost
	I0924 12:19:20.174981    4951 start.go:83] releasing machines lock for "flannel-138000", held for 2.318642208s
	W0924 12:19:20.175049    4951 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:19:20.191288    4951 out.go:177] * Deleting "flannel-138000" in qemu2 ...
	W0924 12:19:20.218044    4951 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:19:20.218068    4951 start.go:729] Will try again in 5 seconds ...
	I0924 12:19:25.220220    4951 start.go:360] acquireMachinesLock for flannel-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:19:25.220491    4951 start.go:364] duration metric: took 219.375µs to acquireMachinesLock for "flannel-138000"
	I0924 12:19:25.220538    4951 start.go:93] Provisioning new machine with config: &{Name:flannel-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:19:25.220627    4951 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:19:25.225873    4951 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:19:25.249142    4951 start.go:159] libmachine.API.Create for "flannel-138000" (driver="qemu2")
	I0924 12:19:25.249174    4951 client.go:168] LocalClient.Create starting
	I0924 12:19:25.249241    4951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:19:25.249291    4951 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:25.249309    4951 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:25.249353    4951 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:19:25.249384    4951 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:25.249392    4951 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:25.249737    4951 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:19:25.415633    4951 main.go:141] libmachine: Creating SSH key...
	I0924 12:19:25.494984    4951 main.go:141] libmachine: Creating Disk image...
	I0924 12:19:25.494993    4951 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:19:25.495212    4951 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/disk.qcow2
	I0924 12:19:25.504792    4951 main.go:141] libmachine: STDOUT: 
	I0924 12:19:25.504811    4951 main.go:141] libmachine: STDERR: 
	I0924 12:19:25.504880    4951 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/disk.qcow2 +20000M
	I0924 12:19:25.513112    4951 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:19:25.513126    4951 main.go:141] libmachine: STDERR: 
	I0924 12:19:25.513146    4951 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/disk.qcow2
	I0924 12:19:25.513151    4951 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:19:25.513159    4951 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:19:25.513185    4951 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:fd:3a:b1:db:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/flannel-138000/disk.qcow2
	I0924 12:19:25.514980    4951 main.go:141] libmachine: STDOUT: 
	I0924 12:19:25.515003    4951 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:19:25.515018    4951 client.go:171] duration metric: took 265.840708ms to LocalClient.Create
	I0924 12:19:27.517076    4951 start.go:128] duration metric: took 2.296455s to createHost
	I0924 12:19:27.517134    4951 start.go:83] releasing machines lock for "flannel-138000", held for 2.296647541s
	W0924 12:19:27.517251    4951 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:19:27.525492    4951 out.go:201] 
	W0924 12:19:27.532509    4951 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:19:27.532517    4951 out.go:270] * 
	W0924 12:19:27.533016    4951 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:19:27.543444    4951 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.82s)
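All of the network-plugin Start failures in this run share one root cause: minikube creates the qcow2 disk without error, but the very first /opt/socket_vmnet/bin/socket_vmnet_client invocation exits with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning nothing is listening on the socket_vmnet unix socket on this agent. Both the initial create and the retry five seconds later hit the same refusal, so each test exits with status 80 after roughly ten seconds. A minimal Go sketch of the same connectivity probe, assuming only the socket path shown in the log (a diagnostic aid, not minikube code):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client targets. With no
		// socket_vmnet daemon listening, net.Dial returns "connection
		// refused", matching the error seen throughout these tests.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is reachable")
	}

On a healthy agent the dial succeeds; here it should print the same refusal, which points at the socket_vmnet daemon on this machine (however it is supervised) rather than at QEMU or the individual CNI configurations.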

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.856020958s)

                                                
                                                
-- stdout --
	* [enable-default-cni-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-138000" primary control-plane node in "enable-default-cni-138000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-138000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 12:19:29.929373    5071 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:19:29.929520    5071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:19:29.929523    5071 out.go:358] Setting ErrFile to fd 2...
	I0924 12:19:29.929526    5071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:19:29.929659    5071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:19:29.930741    5071 out.go:352] Setting JSON to false
	I0924 12:19:29.947635    5071 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4740,"bootTime":1727200829,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:19:29.947714    5071 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:19:29.954135    5071 out.go:177] * [enable-default-cni-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:19:29.960976    5071 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:19:29.961019    5071 notify.go:220] Checking for updates...
	I0924 12:19:29.967940    5071 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:19:29.970999    5071 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:19:29.974006    5071 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:19:29.975420    5071 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:19:29.977971    5071 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:19:29.981372    5071 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:19:29.981437    5071 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:19:29.981496    5071 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:19:29.985810    5071 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:19:29.992998    5071 start.go:297] selected driver: qemu2
	I0924 12:19:29.993006    5071 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:19:29.993014    5071 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:19:29.995370    5071 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:19:29.997940    5071 out.go:177] * Automatically selected the socket_vmnet network
	E0924 12:19:30.001138    5071 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0924 12:19:30.001152    5071 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:19:30.001174    5071 cni.go:84] Creating CNI manager for "bridge"
	I0924 12:19:30.001189    5071 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 12:19:30.001217    5071 start.go:340] cluster config:
	{Name:enable-default-cni-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:19:30.005022    5071 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:19:30.010943    5071 out.go:177] * Starting "enable-default-cni-138000" primary control-plane node in "enable-default-cni-138000" cluster
	I0924 12:19:30.014954    5071 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:19:30.014967    5071 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:19:30.014976    5071 cache.go:56] Caching tarball of preloaded images
	I0924 12:19:30.015029    5071 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:19:30.015035    5071 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:19:30.015077    5071 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/enable-default-cni-138000/config.json ...
	I0924 12:19:30.015088    5071 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/enable-default-cni-138000/config.json: {Name:mkb59a362cfcef04168637ae341c7fd49db7235b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:19:30.015315    5071 start.go:360] acquireMachinesLock for enable-default-cni-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:19:30.015349    5071 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "enable-default-cni-138000"
	I0924 12:19:30.015361    5071 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:19:30.015385    5071 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:19:30.023941    5071 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:19:30.038782    5071 start.go:159] libmachine.API.Create for "enable-default-cni-138000" (driver="qemu2")
	I0924 12:19:30.038812    5071 client.go:168] LocalClient.Create starting
	I0924 12:19:30.038873    5071 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:19:30.038905    5071 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:30.038914    5071 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:30.038953    5071 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:19:30.038975    5071 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:30.038982    5071 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:30.039353    5071 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:19:30.203286    5071 main.go:141] libmachine: Creating SSH key...
	I0924 12:19:30.352374    5071 main.go:141] libmachine: Creating Disk image...
	I0924 12:19:30.352385    5071 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:19:30.352604    5071 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/disk.qcow2
	I0924 12:19:30.362226    5071 main.go:141] libmachine: STDOUT: 
	I0924 12:19:30.362241    5071 main.go:141] libmachine: STDERR: 
	I0924 12:19:30.362300    5071 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/disk.qcow2 +20000M
	I0924 12:19:30.370610    5071 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:19:30.370633    5071 main.go:141] libmachine: STDERR: 
	I0924 12:19:30.370648    5071 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/disk.qcow2
	I0924 12:19:30.370651    5071 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:19:30.370665    5071 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:19:30.370693    5071 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:24:72:06:af:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/disk.qcow2
	I0924 12:19:30.372504    5071 main.go:141] libmachine: STDOUT: 
	I0924 12:19:30.372515    5071 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:19:30.372546    5071 client.go:171] duration metric: took 333.729459ms to LocalClient.Create
	I0924 12:19:32.374743    5071 start.go:128] duration metric: took 2.359342084s to createHost
	I0924 12:19:32.374846    5071 start.go:83] releasing machines lock for "enable-default-cni-138000", held for 2.359502792s
	W0924 12:19:32.374925    5071 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:19:32.385418    5071 out.go:177] * Deleting "enable-default-cni-138000" in qemu2 ...
	W0924 12:19:32.421382    5071 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:19:32.421418    5071 start.go:729] Will try again in 5 seconds ...
	I0924 12:19:37.423657    5071 start.go:360] acquireMachinesLock for enable-default-cni-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:19:37.424190    5071 start.go:364] duration metric: took 411.083µs to acquireMachinesLock for "enable-default-cni-138000"
	I0924 12:19:37.424335    5071 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:19:37.424538    5071 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:19:37.430208    5071 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:19:37.482624    5071 start.go:159] libmachine.API.Create for "enable-default-cni-138000" (driver="qemu2")
	I0924 12:19:37.482680    5071 client.go:168] LocalClient.Create starting
	I0924 12:19:37.482810    5071 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:19:37.482887    5071 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:37.482909    5071 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:37.482977    5071 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:19:37.483023    5071 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:37.483036    5071 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:37.483594    5071 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:19:37.660194    5071 main.go:141] libmachine: Creating SSH key...
	I0924 12:19:37.691438    5071 main.go:141] libmachine: Creating Disk image...
	I0924 12:19:37.691445    5071 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:19:37.691655    5071 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/disk.qcow2
	I0924 12:19:37.700894    5071 main.go:141] libmachine: STDOUT: 
	I0924 12:19:37.700914    5071 main.go:141] libmachine: STDERR: 
	I0924 12:19:37.700978    5071 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/disk.qcow2 +20000M
	I0924 12:19:37.709254    5071 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:19:37.709269    5071 main.go:141] libmachine: STDERR: 
	I0924 12:19:37.709287    5071 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/disk.qcow2
	I0924 12:19:37.709293    5071 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:19:37.709305    5071 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:19:37.709336    5071 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:91:df:95:19:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/enable-default-cni-138000/disk.qcow2
	I0924 12:19:37.711078    5071 main.go:141] libmachine: STDOUT: 
	I0924 12:19:37.711103    5071 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:19:37.711128    5071 client.go:171] duration metric: took 228.433875ms to LocalClient.Create
	I0924 12:19:39.713303    5071 start.go:128] duration metric: took 2.288733333s to createHost
	I0924 12:19:39.713388    5071 start.go:83] releasing machines lock for "enable-default-cni-138000", held for 2.289191667s
	W0924 12:19:39.713721    5071 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:19:39.726358    5071 out.go:201] 
	W0924 12:19:39.730372    5071 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:19:39.730403    5071 out.go:270] * 
	W0924 12:19:39.732461    5071 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:19:39.743322    5071 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.86s)
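The enable-default-cni variant differs from the other plugins only in flag handling: start_flags.go:464 rewrites the deprecated --enable-default-cni flag to --cni=bridge before the cluster config is generated (hence CNI:bridge in the config above), after which it fails on the same socket_vmnet refusal as every other group. A rough sketch of that rewrite under a hypothetical name (normalizeCNI is illustrative, not minikube's actual function):

	package main

	import "fmt"

	// normalizeCNI stands in for the translation the log reports at
	// start_flags.go:464: when the deprecated flag is set and no explicit
	// --cni value was given, fall back to the bridge CNI.
	func normalizeCNI(enableDefaultCNI bool, cni string) string {
		if enableDefaultCNI && cni == "" {
			return "bridge"
		}
		return cni
	}

	func main() {
		// Mirrors this test's invocation: --enable-default-cni=true, no --cni.
		fmt.Println(normalizeCNI(true, "")) // prints "bridge"
	}

This is why the bridge test that follows produces effectively the same failure mode.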

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.944559917s)

                                                
                                                
-- stdout --
	* [bridge-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-138000" primary control-plane node in "bridge-138000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-138000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 12:19:42.003782    5182 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:19:42.003928    5182 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:19:42.003931    5182 out.go:358] Setting ErrFile to fd 2...
	I0924 12:19:42.003933    5182 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:19:42.004070    5182 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:19:42.005173    5182 out.go:352] Setting JSON to false
	I0924 12:19:42.022018    5182 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4753,"bootTime":1727200829,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:19:42.022084    5182 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:19:42.027631    5182 out.go:177] * [bridge-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:19:42.035568    5182 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:19:42.035629    5182 notify.go:220] Checking for updates...
	I0924 12:19:42.043491    5182 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:19:42.046560    5182 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:19:42.049495    5182 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:19:42.052472    5182 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:19:42.055548    5182 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:19:42.058800    5182 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:19:42.058868    5182 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:19:42.058921    5182 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:19:42.063480    5182 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:19:42.070476    5182 start.go:297] selected driver: qemu2
	I0924 12:19:42.070485    5182 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:19:42.070494    5182 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:19:42.072878    5182 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:19:42.075478    5182 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:19:42.078611    5182 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:19:42.078631    5182 cni.go:84] Creating CNI manager for "bridge"
	I0924 12:19:42.078635    5182 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 12:19:42.078665    5182 start.go:340] cluster config:
	{Name:bridge-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:19:42.082419    5182 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:19:42.089554    5182 out.go:177] * Starting "bridge-138000" primary control-plane node in "bridge-138000" cluster
	I0924 12:19:42.093383    5182 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:19:42.093397    5182 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:19:42.093409    5182 cache.go:56] Caching tarball of preloaded images
	I0924 12:19:42.093470    5182 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:19:42.093476    5182 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:19:42.093541    5182 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/bridge-138000/config.json ...
	I0924 12:19:42.093555    5182 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/bridge-138000/config.json: {Name:mk30a01aba9e66b5725be961efa31a96677a12cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:19:42.093800    5182 start.go:360] acquireMachinesLock for bridge-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:19:42.093837    5182 start.go:364] duration metric: took 30.458µs to acquireMachinesLock for "bridge-138000"
	I0924 12:19:42.093851    5182 start.go:93] Provisioning new machine with config: &{Name:bridge-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:19:42.093881    5182 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:19:42.097533    5182 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:19:42.114509    5182 start.go:159] libmachine.API.Create for "bridge-138000" (driver="qemu2")
	I0924 12:19:42.114538    5182 client.go:168] LocalClient.Create starting
	I0924 12:19:42.114612    5182 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:19:42.114644    5182 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:42.114654    5182 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:42.114695    5182 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:19:42.114717    5182 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:42.114726    5182 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:42.115064    5182 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:19:42.278636    5182 main.go:141] libmachine: Creating SSH key...
	I0924 12:19:42.432504    5182 main.go:141] libmachine: Creating Disk image...
	I0924 12:19:42.432511    5182 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:19:42.432741    5182 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/disk.qcow2
	I0924 12:19:42.442309    5182 main.go:141] libmachine: STDOUT: 
	I0924 12:19:42.442324    5182 main.go:141] libmachine: STDERR: 
	I0924 12:19:42.442380    5182 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/disk.qcow2 +20000M
	I0924 12:19:42.450391    5182 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:19:42.450404    5182 main.go:141] libmachine: STDERR: 
	I0924 12:19:42.450421    5182 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/disk.qcow2
	I0924 12:19:42.450427    5182 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:19:42.450439    5182 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:19:42.450462    5182 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:2b:50:3f:60:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/disk.qcow2
	I0924 12:19:42.452063    5182 main.go:141] libmachine: STDOUT: 
	I0924 12:19:42.452076    5182 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:19:42.452095    5182 client.go:171] duration metric: took 337.553042ms to LocalClient.Create
	I0924 12:19:44.454285    5182 start.go:128] duration metric: took 2.360390542s to createHost
	I0924 12:19:44.454381    5182 start.go:83] releasing machines lock for "bridge-138000", held for 2.36054975s
	W0924 12:19:44.454452    5182 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:19:44.465760    5182 out.go:177] * Deleting "bridge-138000" in qemu2 ...
	W0924 12:19:44.496742    5182 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:19:44.496774    5182 start.go:729] Will try again in 5 seconds ...
	I0924 12:19:49.498980    5182 start.go:360] acquireMachinesLock for bridge-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:19:49.499469    5182 start.go:364] duration metric: took 383.917µs to acquireMachinesLock for "bridge-138000"
	I0924 12:19:49.499626    5182 start.go:93] Provisioning new machine with config: &{Name:bridge-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:19:49.499843    5182 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:19:49.505411    5182 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:19:49.547431    5182 start.go:159] libmachine.API.Create for "bridge-138000" (driver="qemu2")
	I0924 12:19:49.547485    5182 client.go:168] LocalClient.Create starting
	I0924 12:19:49.547620    5182 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:19:49.547685    5182 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:49.547703    5182 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:49.547765    5182 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:19:49.547805    5182 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:49.547819    5182 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:49.548343    5182 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:19:49.719821    5182 main.go:141] libmachine: Creating SSH key...
	I0924 12:19:49.856017    5182 main.go:141] libmachine: Creating Disk image...
	I0924 12:19:49.856025    5182 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:19:49.856263    5182 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/disk.qcow2
	I0924 12:19:49.866313    5182 main.go:141] libmachine: STDOUT: 
	I0924 12:19:49.866336    5182 main.go:141] libmachine: STDERR: 
	I0924 12:19:49.866404    5182 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/disk.qcow2 +20000M
	I0924 12:19:49.874980    5182 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:19:49.874996    5182 main.go:141] libmachine: STDERR: 
	I0924 12:19:49.875019    5182 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/disk.qcow2
	I0924 12:19:49.875027    5182 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:19:49.875036    5182 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:19:49.875082    5182 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:79:3d:bb:8f:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/bridge-138000/disk.qcow2
	I0924 12:19:49.876796    5182 main.go:141] libmachine: STDOUT: 
	I0924 12:19:49.876810    5182 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:19:49.876824    5182 client.go:171] duration metric: took 329.334792ms to LocalClient.Create
	I0924 12:19:51.878935    5182 start.go:128] duration metric: took 2.379079333s to createHost
	I0924 12:19:51.879002    5182 start.go:83] releasing machines lock for "bridge-138000", held for 2.379528166s
	W0924 12:19:51.879276    5182 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:19:51.889685    5182 out.go:201] 
	W0924 12:19:51.897717    5182 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:19:51.897746    5182 out.go:270] * 
	* 
	W0924 12:19:51.899793    5182 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:19:51.908633    5182 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.95s)
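Every Start failure in this group exits with the same underlying error: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so each qemu-system-aarch64 launch dies before the guest ever boots. A minimal standalone Go sketch, not part of the test suite (the file name sockcheck.go is hypothetical; the socket path is the SocketVMnetPath value from the config dumps above), that reproduces just that connectivity check:

// sockcheck.go - hypothetical diagnostic, separate from the minikube test suite.
// Dials the unix socket that socket_vmnet_client needs; if the socket_vmnet
// daemon is not running on the CI host, the dial fails with "connection refused",
// matching the error in every failure in this group.
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
	conn, err := net.Dial("unix", sock)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is accepting connections on %s\n", sock)
}

A "connection refused" from this dial points at the host's socket_vmnet service rather than at minikube itself, which is consistent with all of the network-plugin Start tests below failing identically.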

TestNetworkPlugins/group/kubenet/Start (9.78s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.777243708s)

-- stdout --
	* [kubenet-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-138000" primary control-plane node in "kubenet-138000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-138000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:19:54.143605    5292 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:19:54.143741    5292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:19:54.143744    5292 out.go:358] Setting ErrFile to fd 2...
	I0924 12:19:54.143746    5292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:19:54.143898    5292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:19:54.144976    5292 out.go:352] Setting JSON to false
	I0924 12:19:54.161146    5292 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4765,"bootTime":1727200829,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:19:54.161210    5292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:19:54.166710    5292 out.go:177] * [kubenet-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:19:54.174747    5292 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:19:54.174790    5292 notify.go:220] Checking for updates...
	I0924 12:19:54.181736    5292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:19:54.184728    5292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:19:54.187729    5292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:19:54.190714    5292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:19:54.193738    5292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:19:54.197000    5292 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:19:54.197060    5292 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:19:54.197104    5292 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:19:54.201695    5292 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:19:54.207689    5292 start.go:297] selected driver: qemu2
	I0924 12:19:54.207695    5292 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:19:54.207701    5292 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:19:54.209830    5292 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:19:54.212678    5292 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:19:54.215791    5292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:19:54.215807    5292 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0924 12:19:54.215834    5292 start.go:340] cluster config:
	{Name:kubenet-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:19:54.219188    5292 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:19:54.226743    5292 out.go:177] * Starting "kubenet-138000" primary control-plane node in "kubenet-138000" cluster
	I0924 12:19:54.230824    5292 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:19:54.230840    5292 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:19:54.230849    5292 cache.go:56] Caching tarball of preloaded images
	I0924 12:19:54.230905    5292 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:19:54.230910    5292 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:19:54.230968    5292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/kubenet-138000/config.json ...
	I0924 12:19:54.230978    5292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/kubenet-138000/config.json: {Name:mkd283c81d0dd9f8203496e0ea572f0440f6c5ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:19:54.231178    5292 start.go:360] acquireMachinesLock for kubenet-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:19:54.231208    5292 start.go:364] duration metric: took 24.416µs to acquireMachinesLock for "kubenet-138000"
	I0924 12:19:54.231219    5292 start.go:93] Provisioning new machine with config: &{Name:kubenet-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:19:54.231243    5292 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:19:54.239740    5292 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:19:54.255084    5292 start.go:159] libmachine.API.Create for "kubenet-138000" (driver="qemu2")
	I0924 12:19:54.255110    5292 client.go:168] LocalClient.Create starting
	I0924 12:19:54.255168    5292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:19:54.255199    5292 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:54.255209    5292 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:54.255246    5292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:19:54.255271    5292 main.go:141] libmachine: Decoding PEM data...
	I0924 12:19:54.255279    5292 main.go:141] libmachine: Parsing certificate...
	I0924 12:19:54.255623    5292 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:19:54.419626    5292 main.go:141] libmachine: Creating SSH key...
	I0924 12:19:54.476483    5292 main.go:141] libmachine: Creating Disk image...
	I0924 12:19:54.476489    5292 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:19:54.476686    5292 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/disk.qcow2
	I0924 12:19:54.485723    5292 main.go:141] libmachine: STDOUT: 
	I0924 12:19:54.485744    5292 main.go:141] libmachine: STDERR: 
	I0924 12:19:54.485797    5292 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/disk.qcow2 +20000M
	I0924 12:19:54.493956    5292 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:19:54.493972    5292 main.go:141] libmachine: STDERR: 
	I0924 12:19:54.493988    5292 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/disk.qcow2
	I0924 12:19:54.493991    5292 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:19:54.494014    5292 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:19:54.494045    5292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:32:43:37:e1:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/disk.qcow2
	I0924 12:19:54.495740    5292 main.go:141] libmachine: STDOUT: 
	I0924 12:19:54.495755    5292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:19:54.495781    5292 client.go:171] duration metric: took 240.665042ms to LocalClient.Create
	I0924 12:19:56.497978    5292 start.go:128] duration metric: took 2.266716708s to createHost
	I0924 12:19:56.498055    5292 start.go:83] releasing machines lock for "kubenet-138000", held for 2.266853583s
	W0924 12:19:56.498148    5292 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:19:56.505647    5292 out.go:177] * Deleting "kubenet-138000" in qemu2 ...
	W0924 12:19:56.534306    5292 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:19:56.534333    5292 start.go:729] Will try again in 5 seconds ...
	I0924 12:20:01.536591    5292 start.go:360] acquireMachinesLock for kubenet-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:20:01.537207    5292 start.go:364] duration metric: took 476.709µs to acquireMachinesLock for "kubenet-138000"
	I0924 12:20:01.537346    5292 start.go:93] Provisioning new machine with config: &{Name:kubenet-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:20:01.537708    5292 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:20:01.547341    5292 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:20:01.597013    5292 start.go:159] libmachine.API.Create for "kubenet-138000" (driver="qemu2")
	I0924 12:20:01.597084    5292 client.go:168] LocalClient.Create starting
	I0924 12:20:01.597217    5292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:20:01.597299    5292 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:01.597317    5292 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:01.597392    5292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:20:01.597440    5292 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:01.597456    5292 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:01.598116    5292 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:20:01.770352    5292 main.go:141] libmachine: Creating SSH key...
	I0924 12:20:01.833188    5292 main.go:141] libmachine: Creating Disk image...
	I0924 12:20:01.833194    5292 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:20:01.833405    5292 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/disk.qcow2
	I0924 12:20:01.842749    5292 main.go:141] libmachine: STDOUT: 
	I0924 12:20:01.842767    5292 main.go:141] libmachine: STDERR: 
	I0924 12:20:01.842819    5292 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/disk.qcow2 +20000M
	I0924 12:20:01.850895    5292 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:20:01.850910    5292 main.go:141] libmachine: STDERR: 
	I0924 12:20:01.850928    5292 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/disk.qcow2
	I0924 12:20:01.850937    5292 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:20:01.850947    5292 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:20:01.850974    5292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:27:d1:ca:3c:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/kubenet-138000/disk.qcow2
	I0924 12:20:01.852688    5292 main.go:141] libmachine: STDOUT: 
	I0924 12:20:01.852704    5292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:20:01.852717    5292 client.go:171] duration metric: took 255.627958ms to LocalClient.Create
	I0924 12:20:03.854896    5292 start.go:128] duration metric: took 2.317168041s to createHost
	I0924 12:20:03.854968    5292 start.go:83] releasing machines lock for "kubenet-138000", held for 2.317755208s
	W0924 12:20:03.855296    5292 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:20:03.867047    5292 out.go:201] 
	W0924 12:20:03.872069    5292 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:20:03.872087    5292 out.go:270] * 
	* 
	W0924 12:20:03.873590    5292 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:20:03.882954    5292 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.78s)

TestNetworkPlugins/group/custom-flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.914583166s)

-- stdout --
	* [custom-flannel-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-138000" primary control-plane node in "custom-flannel-138000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-138000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:20:06.098003    5405 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:20:06.098134    5405 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:20:06.098138    5405 out.go:358] Setting ErrFile to fd 2...
	I0924 12:20:06.098140    5405 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:20:06.098292    5405 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:20:06.099431    5405 out.go:352] Setting JSON to false
	I0924 12:20:06.115772    5405 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4777,"bootTime":1727200829,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:20:06.115842    5405 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:20:06.122432    5405 out.go:177] * [custom-flannel-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:20:06.130608    5405 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:20:06.130676    5405 notify.go:220] Checking for updates...
	I0924 12:20:06.137527    5405 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:20:06.140573    5405 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:20:06.143541    5405 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:20:06.146541    5405 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:20:06.149562    5405 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:20:06.151252    5405 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:20:06.151317    5405 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:20:06.151372    5405 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:20:06.155522    5405 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:20:06.162396    5405 start.go:297] selected driver: qemu2
	I0924 12:20:06.162404    5405 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:20:06.162410    5405 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:20:06.164543    5405 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:20:06.167571    5405 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:20:06.170693    5405 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:20:06.170716    5405 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0924 12:20:06.170732    5405 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0924 12:20:06.170766    5405 start.go:340] cluster config:
	{Name:custom-flannel-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:20:06.174432    5405 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:20:06.181542    5405 out.go:177] * Starting "custom-flannel-138000" primary control-plane node in "custom-flannel-138000" cluster
	I0924 12:20:06.185581    5405 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:20:06.185598    5405 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:20:06.185613    5405 cache.go:56] Caching tarball of preloaded images
	I0924 12:20:06.185685    5405 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:20:06.185697    5405 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:20:06.185749    5405 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/custom-flannel-138000/config.json ...
	I0924 12:20:06.185767    5405 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/custom-flannel-138000/config.json: {Name:mk9097f9fa5257eaf52805bf0be7c8c9fcec64ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:20:06.185980    5405 start.go:360] acquireMachinesLock for custom-flannel-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:20:06.186016    5405 start.go:364] duration metric: took 28.083µs to acquireMachinesLock for "custom-flannel-138000"
	I0924 12:20:06.186029    5405 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:20:06.186060    5405 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:20:06.194576    5405 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:20:06.211551    5405 start.go:159] libmachine.API.Create for "custom-flannel-138000" (driver="qemu2")
	I0924 12:20:06.211583    5405 client.go:168] LocalClient.Create starting
	I0924 12:20:06.211653    5405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:20:06.211687    5405 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:06.211697    5405 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:06.211734    5405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:20:06.211756    5405 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:06.211763    5405 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:06.212174    5405 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:20:06.377591    5405 main.go:141] libmachine: Creating SSH key...
	I0924 12:20:06.424913    5405 main.go:141] libmachine: Creating Disk image...
	I0924 12:20:06.424919    5405 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:20:06.425138    5405 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/disk.qcow2
	I0924 12:20:06.434312    5405 main.go:141] libmachine: STDOUT: 
	I0924 12:20:06.434328    5405 main.go:141] libmachine: STDERR: 
	I0924 12:20:06.434384    5405 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/disk.qcow2 +20000M
	I0924 12:20:06.442201    5405 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:20:06.442218    5405 main.go:141] libmachine: STDERR: 
	I0924 12:20:06.442244    5405 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/disk.qcow2
	I0924 12:20:06.442248    5405 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:20:06.442262    5405 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:20:06.442293    5405 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:98:9f:02:ab:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/disk.qcow2
	I0924 12:20:06.443932    5405 main.go:141] libmachine: STDOUT: 
	I0924 12:20:06.443945    5405 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:20:06.443965    5405 client.go:171] duration metric: took 232.376417ms to LocalClient.Create
	I0924 12:20:08.446227    5405 start.go:128] duration metric: took 2.260147875s to createHost
	I0924 12:20:08.446290    5405 start.go:83] releasing machines lock for "custom-flannel-138000", held for 2.260282042s
	W0924 12:20:08.446351    5405 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:20:08.459370    5405 out.go:177] * Deleting "custom-flannel-138000" in qemu2 ...
	W0924 12:20:08.487933    5405 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:20:08.487956    5405 start.go:729] Will try again in 5 seconds ...
	I0924 12:20:13.490172    5405 start.go:360] acquireMachinesLock for custom-flannel-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:20:13.490780    5405 start.go:364] duration metric: took 489.042µs to acquireMachinesLock for "custom-flannel-138000"
	I0924 12:20:13.490936    5405 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:20:13.491238    5405 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:20:13.497882    5405 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:20:13.550267    5405 start.go:159] libmachine.API.Create for "custom-flannel-138000" (driver="qemu2")
	I0924 12:20:13.550465    5405 client.go:168] LocalClient.Create starting
	I0924 12:20:13.550654    5405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:20:13.550721    5405 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:13.550737    5405 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:13.550800    5405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:20:13.550846    5405 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:13.550867    5405 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:13.551502    5405 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:20:13.722343    5405 main.go:141] libmachine: Creating SSH key...
	I0924 12:20:13.912828    5405 main.go:141] libmachine: Creating Disk image...
	I0924 12:20:13.912837    5405 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:20:13.913088    5405 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/disk.qcow2
	I0924 12:20:13.922738    5405 main.go:141] libmachine: STDOUT: 
	I0924 12:20:13.922759    5405 main.go:141] libmachine: STDERR: 
	I0924 12:20:13.922826    5405 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/disk.qcow2 +20000M
	I0924 12:20:13.930786    5405 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:20:13.930803    5405 main.go:141] libmachine: STDERR: 
	I0924 12:20:13.930823    5405 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/disk.qcow2
	I0924 12:20:13.930829    5405 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:20:13.930841    5405 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:20:13.930884    5405 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:03:b9:9b:da:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/custom-flannel-138000/disk.qcow2
	I0924 12:20:13.932632    5405 main.go:141] libmachine: STDOUT: 
	I0924 12:20:13.932647    5405 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:20:13.932660    5405 client.go:171] duration metric: took 382.192083ms to LocalClient.Create
	I0924 12:20:15.934861    5405 start.go:128] duration metric: took 2.443602042s to createHost
	I0924 12:20:15.934941    5405 start.go:83] releasing machines lock for "custom-flannel-138000", held for 2.444154542s
	W0924 12:20:15.935509    5405 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:20:15.947140    5405 out.go:201] 
	W0924 12:20:15.958301    5405 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:20:15.958355    5405 out.go:270] * 
	* 
	W0924 12:20:15.960969    5405 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:20:15.970056    5405 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.92s)
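Note: every failure in this group has the same root cause, visible in the "executing:" line above. QEMU is not launched directly but through socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet before handing the connected descriptor to QEMU; with the daemon down, that connect fails with "Connection refused" before the VM ever exists. A minimal probe to confirm the diagnosis on the affected host, sketched in Go (standalone, not part of the test suite; the socket path is the SocketVMnetPath from the cluster config above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket socket_vmnet_client uses; its connected
		// fd is what QEMU receives as "-netdev socket,id=net0,fd=3".
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With the daemon down this reports the same "connection refused"
			// seen throughout this log.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}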

TestNetworkPlugins/group/calico/Start (9.9s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.894524625s)

-- stdout --
	* [calico-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-138000" primary control-plane node in "calico-138000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-138000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:20:18.403758    5529 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:20:18.403885    5529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:20:18.403889    5529 out.go:358] Setting ErrFile to fd 2...
	I0924 12:20:18.403891    5529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:20:18.404048    5529 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:20:18.405186    5529 out.go:352] Setting JSON to false
	I0924 12:20:18.421759    5529 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4789,"bootTime":1727200829,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:20:18.421830    5529 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:20:18.428449    5529 out.go:177] * [calico-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:20:18.436246    5529 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:20:18.436300    5529 notify.go:220] Checking for updates...
	I0924 12:20:18.444282    5529 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:20:18.447257    5529 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:20:18.451273    5529 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:20:18.454238    5529 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:20:18.457216    5529 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:20:18.460518    5529 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:20:18.460582    5529 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:20:18.460653    5529 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:20:18.465273    5529 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:20:18.472214    5529 start.go:297] selected driver: qemu2
	I0924 12:20:18.472220    5529 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:20:18.472225    5529 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:20:18.474451    5529 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:20:18.477251    5529 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:20:18.480314    5529 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:20:18.480338    5529 cni.go:84] Creating CNI manager for "calico"
	I0924 12:20:18.480343    5529 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0924 12:20:18.480380    5529 start.go:340] cluster config:
	{Name:calico-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:20:18.483997    5529 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:20:18.491252    5529 out.go:177] * Starting "calico-138000" primary control-plane node in "calico-138000" cluster
	I0924 12:20:18.495196    5529 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:20:18.495210    5529 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:20:18.495218    5529 cache.go:56] Caching tarball of preloaded images
	I0924 12:20:18.495277    5529 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:20:18.495282    5529 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:20:18.495327    5529 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/calico-138000/config.json ...
	I0924 12:20:18.495338    5529 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/calico-138000/config.json: {Name:mk958477667396c250031bc2023ac60c7e69cf52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:20:18.495533    5529 start.go:360] acquireMachinesLock for calico-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:20:18.495564    5529 start.go:364] duration metric: took 25.375µs to acquireMachinesLock for "calico-138000"
	I0924 12:20:18.495576    5529 start.go:93] Provisioning new machine with config: &{Name:calico-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:20:18.495601    5529 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:20:18.503218    5529 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:20:18.518997    5529 start.go:159] libmachine.API.Create for "calico-138000" (driver="qemu2")
	I0924 12:20:18.519029    5529 client.go:168] LocalClient.Create starting
	I0924 12:20:18.519093    5529 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:20:18.519124    5529 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:18.519134    5529 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:18.519173    5529 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:20:18.519197    5529 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:18.519206    5529 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:18.519565    5529 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:20:18.684978    5529 main.go:141] libmachine: Creating SSH key...
	I0924 12:20:18.796597    5529 main.go:141] libmachine: Creating Disk image...
	I0924 12:20:18.796604    5529 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:20:18.796828    5529 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/disk.qcow2
	I0924 12:20:18.806337    5529 main.go:141] libmachine: STDOUT: 
	I0924 12:20:18.806352    5529 main.go:141] libmachine: STDERR: 
	I0924 12:20:18.806425    5529 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/disk.qcow2 +20000M
	I0924 12:20:18.814746    5529 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:20:18.814758    5529 main.go:141] libmachine: STDERR: 
	I0924 12:20:18.814790    5529 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/disk.qcow2
	I0924 12:20:18.814795    5529 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:20:18.814809    5529 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:20:18.814833    5529 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:a1:43:7d:08:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/disk.qcow2
	I0924 12:20:18.816551    5529 main.go:141] libmachine: STDOUT: 
	I0924 12:20:18.816565    5529 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:20:18.816584    5529 client.go:171] duration metric: took 297.5465ms to LocalClient.Create
	I0924 12:20:20.818786    5529 start.go:128] duration metric: took 2.32316525s to createHost
	I0924 12:20:20.818882    5529 start.go:83] releasing machines lock for "calico-138000", held for 2.323324625s
	W0924 12:20:20.818959    5529 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:20:20.828094    5529 out.go:177] * Deleting "calico-138000" in qemu2 ...
	W0924 12:20:20.862107    5529 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:20:20.862137    5529 start.go:729] Will try again in 5 seconds ...
	I0924 12:20:25.864464    5529 start.go:360] acquireMachinesLock for calico-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:20:25.864912    5529 start.go:364] duration metric: took 370.75µs to acquireMachinesLock for "calico-138000"
	I0924 12:20:25.865008    5529 start.go:93] Provisioning new machine with config: &{Name:calico-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:20:25.865245    5529 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:20:25.880815    5529 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:20:25.925784    5529 start.go:159] libmachine.API.Create for "calico-138000" (driver="qemu2")
	I0924 12:20:25.925841    5529 client.go:168] LocalClient.Create starting
	I0924 12:20:25.925968    5529 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:20:25.926046    5529 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:25.926065    5529 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:25.926147    5529 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:20:25.926193    5529 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:25.926205    5529 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:25.926869    5529 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:20:26.097019    5529 main.go:141] libmachine: Creating SSH key...
	I0924 12:20:26.205751    5529 main.go:141] libmachine: Creating Disk image...
	I0924 12:20:26.205757    5529 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:20:26.205985    5529 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/disk.qcow2
	I0924 12:20:26.215650    5529 main.go:141] libmachine: STDOUT: 
	I0924 12:20:26.215675    5529 main.go:141] libmachine: STDERR: 
	I0924 12:20:26.215742    5529 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/disk.qcow2 +20000M
	I0924 12:20:26.223838    5529 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:20:26.223854    5529 main.go:141] libmachine: STDERR: 
	I0924 12:20:26.223868    5529 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/disk.qcow2
	I0924 12:20:26.223871    5529 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:20:26.223880    5529 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:20:26.223906    5529 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:b1:69:f3:61:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/calico-138000/disk.qcow2
	I0924 12:20:26.225619    5529 main.go:141] libmachine: STDOUT: 
	I0924 12:20:26.225634    5529 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:20:26.225647    5529 client.go:171] duration metric: took 299.8ms to LocalClient.Create
	I0924 12:20:28.227721    5529 start.go:128] duration metric: took 2.362447667s to createHost
	I0924 12:20:28.227775    5529 start.go:83] releasing machines lock for "calico-138000", held for 2.36286475s
	W0924 12:20:28.227938    5529 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:20:28.244372    5529 out.go:201] 
	W0924 12:20:28.247344    5529 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:20:28.247361    5529 out.go:270] * 
	* 
	W0924 12:20:28.248196    5529 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:20:28.258299    5529 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.90s)
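Note the recovery path in the stderr above: the first createHost attempt fails, minikube deletes the partially created "calico-138000" profile, waits a fixed five seconds ("Will try again in 5 seconds ..."), retries once, and only after the second failure exits with GUEST_PROVISION (exit status 80). A minimal Go sketch of that two-attempt shape, with startHost as a hypothetical stand-in for the host-creation step:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost is a hypothetical stand-in for the libmachine create step;
	// on this host it always fails the way the log does.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const attempts = 2
		for i := 1; i <= attempts; i++ {
			err := startHost()
			if err == nil {
				return
			}
			if i < attempts {
				fmt.Println("! StartHost failed, but will try again:", err)
				time.Sleep(5 * time.Second) // fixed pause, as in the log
				continue
			}
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}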

TestNetworkPlugins/group/false/Start (9.9s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-138000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.897754917s)

-- stdout --
	* [false-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-138000" primary control-plane node in "false-138000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-138000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:20:30.690621    5646 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:20:30.690780    5646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:20:30.690785    5646 out.go:358] Setting ErrFile to fd 2...
	I0924 12:20:30.690788    5646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:20:30.690927    5646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:20:30.692046    5646 out.go:352] Setting JSON to false
	I0924 12:20:30.708813    5646 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4801,"bootTime":1727200829,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:20:30.708906    5646 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:20:30.717311    5646 out.go:177] * [false-138000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:20:30.724413    5646 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:20:30.724455    5646 notify.go:220] Checking for updates...
	I0924 12:20:30.731361    5646 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:20:30.735315    5646 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:20:30.739313    5646 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:20:30.743304    5646 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:20:30.747358    5646 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:20:30.751619    5646 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:20:30.751686    5646 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:20:30.751740    5646 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:20:30.756350    5646 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:20:30.763328    5646 start.go:297] selected driver: qemu2
	I0924 12:20:30.763335    5646 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:20:30.763343    5646 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:20:30.765629    5646 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:20:30.770292    5646 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:20:30.774420    5646 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:20:30.774442    5646 cni.go:84] Creating CNI manager for "false"
	I0924 12:20:30.774473    5646 start.go:340] cluster config:
	{Name:false-138000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:20:30.778203    5646 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:20:30.787217    5646 out.go:177] * Starting "false-138000" primary control-plane node in "false-138000" cluster
	I0924 12:20:30.791341    5646 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:20:30.791375    5646 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:20:30.791387    5646 cache.go:56] Caching tarball of preloaded images
	I0924 12:20:30.791463    5646 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:20:30.791469    5646 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:20:30.791550    5646 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/false-138000/config.json ...
	I0924 12:20:30.791565    5646 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/false-138000/config.json: {Name:mk4b09ebbca1a57e251e341e73568ba949b21f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:20:30.791792    5646 start.go:360] acquireMachinesLock for false-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:20:30.791826    5646 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "false-138000"
	I0924 12:20:30.791839    5646 start.go:93] Provisioning new machine with config: &{Name:false-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:20:30.791878    5646 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:20:30.796288    5646 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:20:30.812884    5646 start.go:159] libmachine.API.Create for "false-138000" (driver="qemu2")
	I0924 12:20:30.812913    5646 client.go:168] LocalClient.Create starting
	I0924 12:20:30.812977    5646 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:20:30.813009    5646 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:30.813019    5646 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:30.813064    5646 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:20:30.813088    5646 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:30.813096    5646 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:30.813468    5646 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:20:30.978531    5646 main.go:141] libmachine: Creating SSH key...
	I0924 12:20:31.102565    5646 main.go:141] libmachine: Creating Disk image...
	I0924 12:20:31.102572    5646 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:20:31.102800    5646 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/disk.qcow2
	I0924 12:20:31.112169    5646 main.go:141] libmachine: STDOUT: 
	I0924 12:20:31.112191    5646 main.go:141] libmachine: STDERR: 
	I0924 12:20:31.112253    5646 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/disk.qcow2 +20000M
	I0924 12:20:31.120078    5646 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:20:31.120093    5646 main.go:141] libmachine: STDERR: 
	I0924 12:20:31.120116    5646 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/disk.qcow2
	I0924 12:20:31.120122    5646 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:20:31.120133    5646 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:20:31.120159    5646 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:55:e4:1b:b0:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/disk.qcow2
	I0924 12:20:31.121744    5646 main.go:141] libmachine: STDOUT: 
	I0924 12:20:31.121758    5646 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:20:31.121779    5646 client.go:171] duration metric: took 308.86025ms to LocalClient.Create
	I0924 12:20:33.124051    5646 start.go:128] duration metric: took 2.332163458s to createHost
	I0924 12:20:33.124116    5646 start.go:83] releasing machines lock for "false-138000", held for 2.332297667s
	W0924 12:20:33.124165    5646 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:20:33.135824    5646 out.go:177] * Deleting "false-138000" in qemu2 ...
	W0924 12:20:33.165668    5646 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:20:33.165688    5646 start.go:729] Will try again in 5 seconds ...
	I0924 12:20:38.167758    5646 start.go:360] acquireMachinesLock for false-138000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:20:38.167818    5646 start.go:364] duration metric: took 43.875µs to acquireMachinesLock for "false-138000"
	I0924 12:20:38.167829    5646 start.go:93] Provisioning new machine with config: &{Name:false-138000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-138000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:20:38.167881    5646 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:20:38.177147    5646 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0924 12:20:38.193040    5646 start.go:159] libmachine.API.Create for "false-138000" (driver="qemu2")
	I0924 12:20:38.193067    5646 client.go:168] LocalClient.Create starting
	I0924 12:20:38.193134    5646 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:20:38.193168    5646 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:38.193182    5646 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:38.193218    5646 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:20:38.193241    5646 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:38.193248    5646 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:38.193573    5646 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:20:38.420437    5646 main.go:141] libmachine: Creating SSH key...
	I0924 12:20:38.488750    5646 main.go:141] libmachine: Creating Disk image...
	I0924 12:20:38.488758    5646 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:20:38.488962    5646 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/disk.qcow2
	I0924 12:20:38.505676    5646 main.go:141] libmachine: STDOUT: 
	I0924 12:20:38.505693    5646 main.go:141] libmachine: STDERR: 
	I0924 12:20:38.505765    5646 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/disk.qcow2 +20000M
	I0924 12:20:38.515119    5646 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:20:38.515141    5646 main.go:141] libmachine: STDERR: 
	I0924 12:20:38.515155    5646 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/disk.qcow2
	I0924 12:20:38.515160    5646 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:20:38.515171    5646 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:20:38.515211    5646 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:7b:06:ba:1e:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/false-138000/disk.qcow2
	I0924 12:20:38.517307    5646 main.go:141] libmachine: STDOUT: 
	I0924 12:20:38.517323    5646 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:20:38.517337    5646 client.go:171] duration metric: took 324.267ms to LocalClient.Create
	I0924 12:20:40.519594    5646 start.go:128] duration metric: took 2.351694084s to createHost
	I0924 12:20:40.519668    5646 start.go:83] releasing machines lock for "false-138000", held for 2.351857333s
	W0924 12:20:40.519978    5646 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-138000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:20:40.530681    5646 out.go:201] 
	W0924 12:20:40.534737    5646 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:20:40.534763    5646 out.go:270] * 
	* 
	W0924 12:20:40.537524    5646 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:20:40.545666    5646 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.90s)
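Note: the disk-preparation steps succeed on every attempt (both qemu-img convert and qemu-img resize return empty STDERR); only the network wrapper fails. The failure can be reproduced outside the test harness by invoking the wrapper the same way the "executing:" lines show, with the daemon socket path as its first argument and the QEMU command line after it. A sketch under the same paths the log uses (hypothetical trimmed argument list; expected to exit non-zero with the identical message while the daemon is down):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the "executing:" lines above, trimmed to the minimum:
		// socket_vmnet_client connects to the daemon socket, then runs
		// qemu-system-aarch64 with the connected descriptor as fd 3.
		cmd := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet",
			"qemu-system-aarch64", "-M", "virt,highmem=off", "-display", "none",
		)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// Expected while the daemon is down:
			//   Failed to connect to "/var/run/socket_vmnet": Connection refused
			fmt.Println("exit:", err)
		}
	}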

TestStartStop/group/old-k8s-version/serial/FirstStart (9.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-857000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-857000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.726675583s)

-- stdout --
	* [old-k8s-version-857000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-857000" primary control-plane node in "old-k8s-version-857000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-857000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:20:42.764721    5759 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:20:42.764860    5759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:20:42.764867    5759 out.go:358] Setting ErrFile to fd 2...
	I0924 12:20:42.764877    5759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:20:42.765009    5759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:20:42.766100    5759 out.go:352] Setting JSON to false
	I0924 12:20:42.782121    5759 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4813,"bootTime":1727200829,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:20:42.782189    5759 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:20:42.788864    5759 out.go:177] * [old-k8s-version-857000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:20:42.796820    5759 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:20:42.796878    5759 notify.go:220] Checking for updates...
	I0924 12:20:42.801225    5759 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:20:42.804795    5759 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:20:42.807819    5759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:20:42.810802    5759 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:20:42.813716    5759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:20:42.817242    5759 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:20:42.817318    5759 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:20:42.817360    5759 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:20:42.821784    5759 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:20:42.828785    5759 start.go:297] selected driver: qemu2
	I0924 12:20:42.828792    5759 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:20:42.828801    5759 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:20:42.831060    5759 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:20:42.833763    5759 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:20:42.836823    5759 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:20:42.836854    5759 cni.go:84] Creating CNI manager for ""
	I0924 12:20:42.836876    5759 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0924 12:20:42.836903    5759 start.go:340] cluster config:
	{Name:old-k8s-version-857000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-857000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:20:42.840484    5759 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:20:42.847615    5759 out.go:177] * Starting "old-k8s-version-857000" primary control-plane node in "old-k8s-version-857000" cluster
	I0924 12:20:42.851794    5759 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0924 12:20:42.851810    5759 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0924 12:20:42.851821    5759 cache.go:56] Caching tarball of preloaded images
	I0924 12:20:42.851893    5759 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:20:42.851899    5759 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0924 12:20:42.851964    5759 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/old-k8s-version-857000/config.json ...
	I0924 12:20:42.851976    5759 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/old-k8s-version-857000/config.json: {Name:mkd1ab48040dc97b34defc8edba6a5c9399b8f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:20:42.852187    5759 start.go:360] acquireMachinesLock for old-k8s-version-857000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:20:42.852223    5759 start.go:364] duration metric: took 28.083µs to acquireMachinesLock for "old-k8s-version-857000"
	I0924 12:20:42.852236    5759 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-857000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:20:42.852263    5759 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:20:42.860755    5759 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:20:42.877070    5759 start.go:159] libmachine.API.Create for "old-k8s-version-857000" (driver="qemu2")
	I0924 12:20:42.877102    5759 client.go:168] LocalClient.Create starting
	I0924 12:20:42.877164    5759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:20:42.877195    5759 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:42.877203    5759 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:42.877239    5759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:20:42.877262    5759 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:42.877276    5759 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:42.877628    5759 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:20:43.040987    5759 main.go:141] libmachine: Creating SSH key...
	I0924 12:20:43.085640    5759 main.go:141] libmachine: Creating Disk image...
	I0924 12:20:43.085646    5759 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:20:43.085876    5759 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/disk.qcow2
	I0924 12:20:43.095139    5759 main.go:141] libmachine: STDOUT: 
	I0924 12:20:43.095162    5759 main.go:141] libmachine: STDERR: 
	I0924 12:20:43.095222    5759 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/disk.qcow2 +20000M
	I0924 12:20:43.103244    5759 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:20:43.103260    5759 main.go:141] libmachine: STDERR: 
	I0924 12:20:43.103272    5759 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/disk.qcow2
	I0924 12:20:43.103276    5759 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:20:43.103291    5759 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:20:43.103315    5759 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:fc:ba:3e:e4:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/disk.qcow2
	I0924 12:20:43.105128    5759 main.go:141] libmachine: STDOUT: 
	I0924 12:20:43.105142    5759 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:20:43.105171    5759 client.go:171] duration metric: took 228.0625ms to LocalClient.Create
	I0924 12:20:45.107355    5759 start.go:128] duration metric: took 2.255083s to createHost
	I0924 12:20:45.107436    5759 start.go:83] releasing machines lock for "old-k8s-version-857000", held for 2.255221375s
	W0924 12:20:45.107491    5759 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:20:45.116479    5759 out.go:177] * Deleting "old-k8s-version-857000" in qemu2 ...
	W0924 12:20:45.140878    5759 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:20:45.140898    5759 start.go:729] Will try again in 5 seconds ...
	I0924 12:20:50.141641    5759 start.go:360] acquireMachinesLock for old-k8s-version-857000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:20:50.141915    5759 start.go:364] duration metric: took 218.959µs to acquireMachinesLock for "old-k8s-version-857000"
	I0924 12:20:50.141952    5759 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-857000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:20:50.142094    5759 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:20:50.151388    5759 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:20:50.185452    5759 start.go:159] libmachine.API.Create for "old-k8s-version-857000" (driver="qemu2")
	I0924 12:20:50.185499    5759 client.go:168] LocalClient.Create starting
	I0924 12:20:50.185592    5759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:20:50.185643    5759 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:50.185659    5759 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:50.185717    5759 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:20:50.185751    5759 main.go:141] libmachine: Decoding PEM data...
	I0924 12:20:50.185761    5759 main.go:141] libmachine: Parsing certificate...
	I0924 12:20:50.186321    5759 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:20:50.356663    5759 main.go:141] libmachine: Creating SSH key...
	I0924 12:20:50.392695    5759 main.go:141] libmachine: Creating Disk image...
	I0924 12:20:50.392700    5759 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:20:50.392920    5759 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/disk.qcow2
	I0924 12:20:50.402166    5759 main.go:141] libmachine: STDOUT: 
	I0924 12:20:50.402184    5759 main.go:141] libmachine: STDERR: 
	I0924 12:20:50.402242    5759 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/disk.qcow2 +20000M
	I0924 12:20:50.410252    5759 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:20:50.410269    5759 main.go:141] libmachine: STDERR: 
	I0924 12:20:50.410279    5759 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/disk.qcow2
	I0924 12:20:50.410283    5759 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:20:50.410292    5759 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:20:50.410320    5759 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:eb:be:8d:70:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/disk.qcow2
	I0924 12:20:50.411960    5759 main.go:141] libmachine: STDOUT: 
	I0924 12:20:50.411975    5759 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:20:50.411987    5759 client.go:171] duration metric: took 226.485375ms to LocalClient.Create
	I0924 12:20:52.414293    5759 start.go:128] duration metric: took 2.272076916s to createHost
	I0924 12:20:52.414375    5759 start.go:83] releasing machines lock for "old-k8s-version-857000", held for 2.272460042s
	W0924 12:20:52.414841    5759 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-857000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-857000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:20:52.429633    5759 out.go:201] 
	W0924 12:20:52.432706    5759 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:20:52.432782    5759 out.go:270] * 
	* 
	W0924 12:20:52.435406    5759 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:20:52.448395    5759 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-857000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000: exit status 7 (64.627417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-857000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.79s)
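
Note: every failure in this serial group traces back to the one stderr line repeated above: the socket_vmnet daemon on the build host refuses connections, so socket_vmnet_client can never hand the qemu2 VM its network file descriptor. As a triage aid, the sketch below (a hypothetical standalone probe, not part of the minikube test harness; the socket path is the SocketVMnetPath value from the cluster config above) dials the same Unix socket the driver client uses, and with the daemon down fails with the same "connection refused".

	// socketcheck.go - probe the socket_vmnet Unix socket the qemu2 driver needs.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// SocketVMnetPath taken from the cluster config logged above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With the daemon down this mirrors the libmachine STDERR lines above.
			fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}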

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-857000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-857000 create -f testdata/busybox.yaml: exit status 1 (29.958584ms)

** stderr ** 
	error: context "old-k8s-version-857000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-857000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000: exit status 7 (30.2395ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-857000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000: exit status 7 (29.603625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-857000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
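
Note: this failure is purely downstream of FirstStart: the cluster was never created, so the kubeconfig holds no "old-k8s-version-857000" context for kubectl to select. A minimal sketch (assuming k8s.io/client-go is available; the kubeconfig path is the KUBECONFIG value printed in the logs) that lists the contexts actually present:

	// contexts.go - enumerate contexts in the kubeconfig used by this run.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19700-1081/kubeconfig")
		if err != nil {
			panic(err)
		}
		for name := range cfg.Contexts {
			fmt.Println(name) // "old-k8s-version-857000" will be absent
		}
	}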

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-857000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-857000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-857000 describe deploy/metrics-server -n kube-system: exit status 1 (27.322875ms)

** stderr ** 
	error: context "old-k8s-version-857000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-857000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000: exit status 7 (32.136375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-857000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-857000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-857000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.193755083s)

-- stdout --
	* [old-k8s-version-857000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-857000" primary control-plane node in "old-k8s-version-857000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-857000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-857000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:20:56.322695    5811 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:20:56.322841    5811 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:20:56.322848    5811 out.go:358] Setting ErrFile to fd 2...
	I0924 12:20:56.322850    5811 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:20:56.322989    5811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:20:56.324029    5811 out.go:352] Setting JSON to false
	I0924 12:20:56.340394    5811 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4827,"bootTime":1727200829,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:20:56.340457    5811 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:20:56.345623    5811 out.go:177] * [old-k8s-version-857000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:20:56.352608    5811 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:20:56.352682    5811 notify.go:220] Checking for updates...
	I0924 12:20:56.360443    5811 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:20:56.363586    5811 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:20:56.366623    5811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:20:56.369616    5811 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:20:56.372562    5811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:20:56.375850    5811 config.go:182] Loaded profile config "old-k8s-version-857000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0924 12:20:56.378586    5811 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 12:20:56.381549    5811 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:20:56.385626    5811 out.go:177] * Using the qemu2 driver based on existing profile
	I0924 12:20:56.392540    5811 start.go:297] selected driver: qemu2
	I0924 12:20:56.392547    5811 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-857000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:20:56.392614    5811 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:20:56.395184    5811 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:20:56.395214    5811 cni.go:84] Creating CNI manager for ""
	I0924 12:20:56.395234    5811 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0924 12:20:56.395253    5811 start.go:340] cluster config:
	{Name:old-k8s-version-857000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-857000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:20:56.399089    5811 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:20:56.406466    5811 out.go:177] * Starting "old-k8s-version-857000" primary control-plane node in "old-k8s-version-857000" cluster
	I0924 12:20:56.410671    5811 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0924 12:20:56.410689    5811 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0924 12:20:56.410702    5811 cache.go:56] Caching tarball of preloaded images
	I0924 12:20:56.410765    5811 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:20:56.410771    5811 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0924 12:20:56.410834    5811 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/old-k8s-version-857000/config.json ...
	I0924 12:20:56.411316    5811 start.go:360] acquireMachinesLock for old-k8s-version-857000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:20:56.411345    5811 start.go:364] duration metric: took 22.625µs to acquireMachinesLock for "old-k8s-version-857000"
	I0924 12:20:56.411355    5811 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:20:56.411361    5811 fix.go:54] fixHost starting: 
	I0924 12:20:56.411480    5811 fix.go:112] recreateIfNeeded on old-k8s-version-857000: state=Stopped err=<nil>
	W0924 12:20:56.411492    5811 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:20:56.414589    5811 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-857000" ...
	I0924 12:20:56.422609    5811 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:20:56.422644    5811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:eb:be:8d:70:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/disk.qcow2
	I0924 12:20:56.424639    5811 main.go:141] libmachine: STDOUT: 
	I0924 12:20:56.424657    5811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:20:56.424688    5811 fix.go:56] duration metric: took 13.325459ms for fixHost
	I0924 12:20:56.424693    5811 start.go:83] releasing machines lock for "old-k8s-version-857000", held for 13.344083ms
	W0924 12:20:56.424700    5811 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:20:56.424741    5811 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:20:56.424745    5811 start.go:729] Will try again in 5 seconds ...
	I0924 12:21:01.426914    5811 start.go:360] acquireMachinesLock for old-k8s-version-857000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:01.427742    5811 start.go:364] duration metric: took 656.833µs to acquireMachinesLock for "old-k8s-version-857000"
	I0924 12:21:01.427952    5811 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:21:01.427974    5811 fix.go:54] fixHost starting: 
	I0924 12:21:01.428802    5811 fix.go:112] recreateIfNeeded on old-k8s-version-857000: state=Stopped err=<nil>
	W0924 12:21:01.428832    5811 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:21:01.437408    5811 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-857000" ...
	I0924 12:21:01.440460    5811 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:01.440798    5811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:eb:be:8d:70:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/old-k8s-version-857000/disk.qcow2
	I0924 12:21:01.450456    5811 main.go:141] libmachine: STDOUT: 
	I0924 12:21:01.450525    5811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:01.450590    5811 fix.go:56] duration metric: took 22.617791ms for fixHost
	I0924 12:21:01.450607    5811 start.go:83] releasing machines lock for "old-k8s-version-857000", held for 22.798667ms
	W0924 12:21:01.450765    5811 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-857000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-857000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:01.459390    5811 out.go:201] 
	W0924 12:21:01.463470    5811 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:21:01.463484    5811 out.go:270] * 
	* 
	W0924 12:21:01.465107    5811 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:21:01.474453    5811 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-857000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000: exit status 7 (59.295709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-857000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-857000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000: exit status 7 (32.793541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-857000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-857000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-857000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-857000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.762958ms)

** stderr ** 
	error: context "old-k8s-version-857000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-857000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000: exit status 7 (29.148916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-857000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-857000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000: exit status 7 (30.366708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-857000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
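
Note: the "(-want +got)" block above is a go-cmp style diff: the test compares the expected v1.20.0 image list against the output of "minikube image list", and because the host never started, the got side is empty and every expected image is reported missing. A hedged reproduction of that comparison (assuming github.com/google/go-cmp, which the diff format suggests; the want list here is abbreviated):

	// imagediff.go - reproduce the "-want +got" diff shown above.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"k8s.gcr.io/coredns:1.7.0",
			"k8s.gcr.io/etcd:3.4.13-0",
			"k8s.gcr.io/kube-apiserver:v1.20.0",
			"k8s.gcr.io/pause:3.2",
		}
		var got []string // empty: the stopped host reported no images
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
		}
	}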

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-857000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-857000 --alsologtostderr -v=1: exit status 83 (45.507459ms)

-- stdout --
	* The control-plane node old-k8s-version-857000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-857000"

-- /stdout --
** stderr ** 
	I0924 12:21:01.738685    5831 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:21:01.739573    5831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:01.739578    5831 out.go:358] Setting ErrFile to fd 2...
	I0924 12:21:01.739580    5831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:01.739730    5831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:21:01.739945    5831 out.go:352] Setting JSON to false
	I0924 12:21:01.739955    5831 mustload.go:65] Loading cluster: old-k8s-version-857000
	I0924 12:21:01.740175    5831 config.go:182] Loaded profile config "old-k8s-version-857000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0924 12:21:01.743899    5831 out.go:177] * The control-plane node old-k8s-version-857000 host is not running: state=Stopped
	I0924 12:21:01.751936    5831 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-857000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-857000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000: exit status 7 (30.403333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-857000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000: exit status 7 (32.313666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-857000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (9.91s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-118000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-118000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.858635417s)

-- stdout --
	* [no-preload-118000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-118000" primary control-plane node in "no-preload-118000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-118000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:21:02.073215    5850 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:21:02.073353    5850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:02.073357    5850 out.go:358] Setting ErrFile to fd 2...
	I0924 12:21:02.073360    5850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:02.073496    5850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:21:02.074700    5850 out.go:352] Setting JSON to false
	I0924 12:21:02.091848    5850 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4833,"bootTime":1727200829,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:21:02.091941    5850 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:21:02.095960    5850 out.go:177] * [no-preload-118000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:21:02.102102    5850 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:21:02.102175    5850 notify.go:220] Checking for updates...
	I0924 12:21:02.108067    5850 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:21:02.111077    5850 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:21:02.112244    5850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:21:02.115035    5850 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:21:02.118063    5850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:21:02.121458    5850 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:21:02.121514    5850 config.go:182] Loaded profile config "stopped-upgrade-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0924 12:21:02.121564    5850 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:21:02.126068    5850 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:21:02.133062    5850 start.go:297] selected driver: qemu2
	I0924 12:21:02.133068    5850 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:21:02.133074    5850 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:21:02.135320    5850 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:21:02.138026    5850 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:21:02.141136    5850 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:21:02.141153    5850 cni.go:84] Creating CNI manager for ""
	I0924 12:21:02.141171    5850 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:21:02.141177    5850 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 12:21:02.141200    5850 start.go:340] cluster config:
	{Name:no-preload-118000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-118000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:21:02.144702    5850 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:02.152059    5850 out.go:177] * Starting "no-preload-118000" primary control-plane node in "no-preload-118000" cluster
	I0924 12:21:02.156049    5850 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:21:02.156104    5850 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/no-preload-118000/config.json ...
	I0924 12:21:02.156118    5850 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/no-preload-118000/config.json: {Name:mk477b946f9e459a630898567dfda2b30338221e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:21:02.156146    5850 cache.go:107] acquiring lock: {Name:mk945321c85c08e9c9840e1e707ca00e831c4213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:02.156148    5850 cache.go:107] acquiring lock: {Name:mk75f8363eccadc1ed90ec22051a1278540bd35b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:02.156201    5850 cache.go:115] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0924 12:21:02.156208    5850 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 66.208µs
	I0924 12:21:02.156235    5850 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0924 12:21:02.156200    5850 cache.go:107] acquiring lock: {Name:mk874d0bf029e9ec92c2e1ca0c670a222877370d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:02.156276    5850 cache.go:107] acquiring lock: {Name:mk3ec785578d4fcd0a6d518e403f3186cf35936c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:02.156286    5850 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 12:21:02.156284    5850 cache.go:107] acquiring lock: {Name:mk648dd645474219d92ab04ad96100e5f2d6f645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:02.156283    5850 cache.go:107] acquiring lock: {Name:mkfbe44a3b1fd8d0ec8387e199d76873bd515423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:02.156343    5850 start.go:360] acquireMachinesLock for no-preload-118000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:02.156350    5850 cache.go:107] acquiring lock: {Name:mke529f06243198e861510cad929f74c32c063f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:02.156379    5850 start.go:364] duration metric: took 29.209µs to acquireMachinesLock for "no-preload-118000"
	I0924 12:21:02.156367    5850 cache.go:107] acquiring lock: {Name:mkf4dcd25ca4e41e57f8eb97bf9bc406715de281 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:02.156390    5850 start.go:93] Provisioning new machine with config: &{Name:no-preload-118000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-118000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:21:02.156417    5850 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:21:02.156456    5850 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 12:21:02.156460    5850 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0924 12:21:02.156605    5850 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 12:21:02.156873    5850 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 12:21:02.156888    5850 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0924 12:21:02.160117    5850 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:21:02.160666    5850 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 12:21:02.163817    5850 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0924 12:21:02.166843    5850 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0924 12:21:02.166913    5850 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0924 12:21:02.167863    5850 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0924 12:21:02.167898    5850 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0924 12:21:02.167918    5850 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0924 12:21:02.168102    5850 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0924 12:21:02.176987    5850 start.go:159] libmachine.API.Create for "no-preload-118000" (driver="qemu2")
	I0924 12:21:02.177012    5850 client.go:168] LocalClient.Create starting
	I0924 12:21:02.177125    5850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:21:02.177154    5850 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:02.177162    5850 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:02.177199    5850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:21:02.177223    5850 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:02.177236    5850 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:02.177616    5850 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:21:02.347102    5850 main.go:141] libmachine: Creating SSH key...
	I0924 12:21:02.420798    5850 main.go:141] libmachine: Creating Disk image...
	I0924 12:21:02.420824    5850 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:21:02.421073    5850 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/disk.qcow2
	I0924 12:21:02.430643    5850 main.go:141] libmachine: STDOUT: 
	I0924 12:21:02.430671    5850 main.go:141] libmachine: STDERR: 
	I0924 12:21:02.430749    5850 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/disk.qcow2 +20000M
	I0924 12:21:02.439664    5850 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:21:02.439688    5850 main.go:141] libmachine: STDERR: 
	I0924 12:21:02.439701    5850 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/disk.qcow2
	I0924 12:21:02.439706    5850 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:21:02.439722    5850 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:02.439750    5850 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:2e:e9:7f:ee:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/disk.qcow2
	I0924 12:21:02.441588    5850 main.go:141] libmachine: STDOUT: 
	I0924 12:21:02.441603    5850 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:02.441625    5850 client.go:171] duration metric: took 264.607792ms to LocalClient.Create
	I0924 12:21:02.528745    5850 cache.go:162] opening:  /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0924 12:21:02.542897    5850 cache.go:162] opening:  /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0924 12:21:02.565794    5850 cache.go:162] opening:  /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0924 12:21:02.583255    5850 cache.go:162] opening:  /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0924 12:21:02.619206    5850 cache.go:162] opening:  /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0924 12:21:02.625555    5850 cache.go:162] opening:  /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0924 12:21:02.671998    5850 cache.go:162] opening:  /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0924 12:21:02.738377    5850 cache.go:157] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0924 12:21:02.738395    5850 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 582.066291ms
	I0924 12:21:02.738403    5850 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0924 12:21:04.441779    5850 start.go:128] duration metric: took 2.285366125s to createHost
	I0924 12:21:04.441802    5850 start.go:83] releasing machines lock for "no-preload-118000", held for 2.285434s
	W0924 12:21:04.441831    5850 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:04.450728    5850 out.go:177] * Deleting "no-preload-118000" in qemu2 ...
	W0924 12:21:04.472048    5850 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:04.472063    5850 start.go:729] Will try again in 5 seconds ...
	I0924 12:21:05.669994    5850 cache.go:157] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0924 12:21:05.670015    5850 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 3.5138995s
	I0924 12:21:05.670024    5850 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0924 12:21:05.745953    5850 cache.go:157] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0924 12:21:05.745964    5850 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 3.589733084s
	I0924 12:21:05.745969    5850 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0924 12:21:06.206327    5850 cache.go:157] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0924 12:21:06.206343    5850 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.050099833s
	I0924 12:21:06.206351    5850 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0924 12:21:06.495047    5850 cache.go:157] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0924 12:21:06.495067    5850 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.338921792s
	I0924 12:21:06.495080    5850 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0924 12:21:07.640535    5850 cache.go:157] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0924 12:21:07.640584    5850 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 5.484405333s
	I0924 12:21:07.640611    5850 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0924 12:21:08.768757    5850 cache.go:157] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0924 12:21:08.768803    5850 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 6.612569875s
	I0924 12:21:08.768822    5850 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0924 12:21:08.768867    5850 cache.go:87] Successfully saved all images to host disk.
	I0924 12:21:09.474220    5850 start.go:360] acquireMachinesLock for no-preload-118000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:09.474714    5850 start.go:364] duration metric: took 379.375µs to acquireMachinesLock for "no-preload-118000"
	I0924 12:21:09.474839    5850 start.go:93] Provisioning new machine with config: &{Name:no-preload-118000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:no-preload-118000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:21:09.475034    5850 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:21:09.487606    5850 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:21:09.537100    5850 start.go:159] libmachine.API.Create for "no-preload-118000" (driver="qemu2")
	I0924 12:21:09.537158    5850 client.go:168] LocalClient.Create starting
	I0924 12:21:09.537282    5850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:21:09.537354    5850 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:09.537377    5850 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:09.537446    5850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:21:09.537491    5850 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:09.537510    5850 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:09.538062    5850 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:21:09.715830    5850 main.go:141] libmachine: Creating SSH key...
	I0924 12:21:09.841230    5850 main.go:141] libmachine: Creating Disk image...
	I0924 12:21:09.841243    5850 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:21:09.841495    5850 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/disk.qcow2
	I0924 12:21:09.851099    5850 main.go:141] libmachine: STDOUT: 
	I0924 12:21:09.851127    5850 main.go:141] libmachine: STDERR: 
	I0924 12:21:09.851203    5850 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/disk.qcow2 +20000M
	I0924 12:21:09.859675    5850 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:21:09.859697    5850 main.go:141] libmachine: STDERR: 
	I0924 12:21:09.859717    5850 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/disk.qcow2
	I0924 12:21:09.859721    5850 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:21:09.859733    5850 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:09.859776    5850 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:1c:67:81:f0:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/disk.qcow2
	I0924 12:21:09.861618    5850 main.go:141] libmachine: STDOUT: 
	I0924 12:21:09.861632    5850 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:09.861646    5850 client.go:171] duration metric: took 324.483625ms to LocalClient.Create
	I0924 12:21:11.863874    5850 start.go:128] duration metric: took 2.388826167s to createHost
	I0924 12:21:11.863935    5850 start.go:83] releasing machines lock for "no-preload-118000", held for 2.389216208s
	W0924 12:21:11.864250    5850 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-118000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-118000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:11.876790    5850 out.go:201] 
	W0924 12:21:11.880924    5850 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:21:11.880956    5850 out.go:270] * 
	* 
	W0924 12:21:11.883027    5850 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:21:11.891648    5850 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-118000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000: exit status 7 (52.014417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-118000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.91s)
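Every first-start failure in this report shares the same root cause visible in the stderr above: minikube launches qemu-system-aarch64 through socket_vmnet_client, and the connection to the /var/run/socket_vmnet unix socket is refused, so the VM never gets a NIC and host creation aborts after one retry. A short diagnostic sketch, using only the paths that appear in the log; the final probe assumes socket_vmnet_client simply connects to the socket and then execs the given command (as it does for qemu above), which is an assumption of this sketch rather than something the report states:

	# is the socket_vmnet daemon running, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# assumed probe: should fail with "Connection refused" if the daemon is down
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true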

TestStartStop/group/embed-certs/serial/FirstStart (10.64s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-768000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-768000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.570914708s)

-- stdout --
	* [embed-certs-768000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-768000" primary control-plane node in "embed-certs-768000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-768000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:21:11.024090    5899 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:21:11.024220    5899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:11.024223    5899 out.go:358] Setting ErrFile to fd 2...
	I0924 12:21:11.024226    5899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:11.024341    5899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:21:11.025427    5899 out.go:352] Setting JSON to false
	I0924 12:21:11.041758    5899 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4842,"bootTime":1727200829,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:21:11.041829    5899 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:21:11.046894    5899 out.go:177] * [embed-certs-768000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:21:11.055881    5899 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:21:11.055882    5899 notify.go:220] Checking for updates...
	I0924 12:21:11.062839    5899 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:21:11.065773    5899 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:21:11.068841    5899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:21:11.071868    5899 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:21:11.074813    5899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:21:11.078185    5899 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:21:11.078256    5899 config.go:182] Loaded profile config "no-preload-118000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:21:11.078307    5899 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:21:11.082750    5899 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:21:11.089842    5899 start.go:297] selected driver: qemu2
	I0924 12:21:11.089850    5899 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:21:11.089859    5899 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:21:11.092280    5899 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:21:11.094774    5899 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:21:11.097845    5899 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:21:11.097864    5899 cni.go:84] Creating CNI manager for ""
	I0924 12:21:11.097895    5899 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:21:11.097900    5899 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 12:21:11.097938    5899 start.go:340] cluster config:
	{Name:embed-certs-768000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:21:11.101654    5899 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:11.108808    5899 out.go:177] * Starting "embed-certs-768000" primary control-plane node in "embed-certs-768000" cluster
	I0924 12:21:11.112808    5899 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:21:11.112825    5899 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:21:11.112836    5899 cache.go:56] Caching tarball of preloaded images
	I0924 12:21:11.112905    5899 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:21:11.112911    5899 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:21:11.112974    5899 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/embed-certs-768000/config.json ...
	I0924 12:21:11.112986    5899 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/embed-certs-768000/config.json: {Name:mk458acbe8396da7a702a48779f9e868edb419ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:21:11.113217    5899 start.go:360] acquireMachinesLock for embed-certs-768000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:11.864041    5899 start.go:364] duration metric: took 750.806416ms to acquireMachinesLock for "embed-certs-768000"
	I0924 12:21:11.864272    5899 start.go:93] Provisioning new machine with config: &{Name:embed-certs-768000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:21:11.864501    5899 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:21:11.873879    5899 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:21:11.923540    5899 start.go:159] libmachine.API.Create for "embed-certs-768000" (driver="qemu2")
	I0924 12:21:11.923597    5899 client.go:168] LocalClient.Create starting
	I0924 12:21:11.923704    5899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:21:11.923766    5899 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:11.923786    5899 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:11.923846    5899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:21:11.923893    5899 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:11.923910    5899 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:11.924555    5899 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:21:12.097228    5899 main.go:141] libmachine: Creating SSH key...
	I0924 12:21:12.128225    5899 main.go:141] libmachine: Creating Disk image...
	I0924 12:21:12.128230    5899 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:21:12.128407    5899 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/disk.qcow2
	I0924 12:21:12.137731    5899 main.go:141] libmachine: STDOUT: 
	I0924 12:21:12.137752    5899 main.go:141] libmachine: STDERR: 
	I0924 12:21:12.137806    5899 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/disk.qcow2 +20000M
	I0924 12:21:12.146441    5899 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:21:12.146461    5899 main.go:141] libmachine: STDERR: 
	I0924 12:21:12.146476    5899 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/disk.qcow2
	I0924 12:21:12.146480    5899 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:21:12.146495    5899 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:12.146528    5899 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:f0:38:db:0b:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/disk.qcow2
	I0924 12:21:12.148284    5899 main.go:141] libmachine: STDOUT: 
	I0924 12:21:12.148298    5899 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:12.148318    5899 client.go:171] duration metric: took 224.714375ms to LocalClient.Create
	I0924 12:21:14.150494    5899 start.go:128] duration metric: took 2.285974208s to createHost
	I0924 12:21:14.150598    5899 start.go:83] releasing machines lock for "embed-certs-768000", held for 2.286495542s
	W0924 12:21:14.150660    5899 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:14.159893    5899 out.go:177] * Deleting "embed-certs-768000" in qemu2 ...
	W0924 12:21:14.187435    5899 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:14.187460    5899 start.go:729] Will try again in 5 seconds ...
	I0924 12:21:19.189720    5899 start.go:360] acquireMachinesLock for embed-certs-768000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:19.190182    5899 start.go:364] duration metric: took 331µs to acquireMachinesLock for "embed-certs-768000"
	I0924 12:21:19.190315    5899 start.go:93] Provisioning new machine with config: &{Name:embed-certs-768000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:21:19.190641    5899 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:21:19.199217    5899 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:21:19.249509    5899 start.go:159] libmachine.API.Create for "embed-certs-768000" (driver="qemu2")
	I0924 12:21:19.249554    5899 client.go:168] LocalClient.Create starting
	I0924 12:21:19.249666    5899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:21:19.249737    5899 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:19.249755    5899 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:19.249833    5899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:21:19.249876    5899 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:19.249888    5899 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:19.250507    5899 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:21:19.430805    5899 main.go:141] libmachine: Creating SSH key...
	I0924 12:21:19.494721    5899 main.go:141] libmachine: Creating Disk image...
	I0924 12:21:19.494727    5899 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:21:19.494949    5899 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/disk.qcow2
	I0924 12:21:19.504341    5899 main.go:141] libmachine: STDOUT: 
	I0924 12:21:19.504358    5899 main.go:141] libmachine: STDERR: 
	I0924 12:21:19.504418    5899 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/disk.qcow2 +20000M
	I0924 12:21:19.512165    5899 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:21:19.512179    5899 main.go:141] libmachine: STDERR: 
	I0924 12:21:19.512189    5899 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/disk.qcow2
	I0924 12:21:19.512194    5899 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:21:19.512202    5899 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:19.512234    5899 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:f4:2e:07:50:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/disk.qcow2
	I0924 12:21:19.513805    5899 main.go:141] libmachine: STDOUT: 
	I0924 12:21:19.513820    5899 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:19.513833    5899 client.go:171] duration metric: took 264.275208ms to LocalClient.Create
	I0924 12:21:21.516027    5899 start.go:128] duration metric: took 2.325371459s to createHost
	I0924 12:21:21.516086    5899 start.go:83] releasing machines lock for "embed-certs-768000", held for 2.32589675s
	W0924 12:21:21.516394    5899 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-768000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-768000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:21.528915    5899 out.go:201] 
	W0924 12:21:21.539035    5899 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:21:21.539059    5899 out.go:270] * 
	* 
	W0924 12:21:21.541632    5899 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:21:21.549865    5899 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-768000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000: exit status 7 (66.196542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-768000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.64s)
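
Every qemu2 start in this run fails the same way: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet (the SocketVMnetPath in the config dump above), so QEMU never receives its network file descriptor and LocalClient.Create aborts. Below is a minimal sketch, not part of minikube, that reproduces the same probe; the only input it assumes is the socket path taken from this log:

	// sketch: dial the socket_vmnet unix socket the way socket_vmnet_client does.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// With no daemon listening, this is the same "Connection refused"
			// that libmachine records on STDERR above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

"Connection refused" on a unix socket means the path exists but nothing is accepting on it; a socket_vmnet daemon that is not running on this agent is consistent with every networked qemu2 test in this report failing within seconds.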

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-118000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-118000 create -f testdata/busybox.yaml: exit status 1 (31.01825ms)

** stderr ** 
	error: context "no-preload-118000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-118000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000: exit status 7 (34.098833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-118000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000: exit status 7 (33.915917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-118000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
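
This failure is a knock-on effect: FirstStart never created the VM, so minikube never wrote a "no-preload-118000" entry into the kubeconfig, and kubectl rejects the --context flag before touching any cluster. A minimal sketch of the same existence check with k8s.io/client-go (an assumption for illustration; the kubeconfig path is the KUBECONFIG value from the logs above):

	// sketch: reproduce kubectl's `context "..." does not exist` check.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := "/Users/jenkins/minikube-integration/19700-1081/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["no-preload-118000"]; !ok {
			// Matches the kubectl stderr captured above.
			fmt.Println(`error: context "no-preload-118000" does not exist`)
		}
	}

The remaining context-related failures in this report (DeployApp, EnableAddonWhileActive, UserAppExistsAfterStop, AddonExistsAfterStop for both profiles) are the same missing-context symptom and are not annotated again.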

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-118000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-118000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-118000 describe deploy/metrics-server -n kube-system: exit status 1 (26.471375ms)

** stderr ** 
	error: context "no-preload-118000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-118000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000: exit status 7 (29.645834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-118000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)

TestStartStop/group/no-preload/serial/SecondStart (7.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-118000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-118000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (7.209196917s)

-- stdout --
	* [no-preload-118000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-118000" primary control-plane node in "no-preload-118000" cluster
	* Restarting existing qemu2 VM for "no-preload-118000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-118000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:21:14.414987    5939 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:21:14.415103    5939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:14.415106    5939 out.go:358] Setting ErrFile to fd 2...
	I0924 12:21:14.415108    5939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:14.415246    5939 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:21:14.416302    5939 out.go:352] Setting JSON to false
	I0924 12:21:14.432405    5939 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4845,"bootTime":1727200829,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:21:14.432481    5939 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:21:14.437850    5939 out.go:177] * [no-preload-118000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:21:14.444849    5939 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:21:14.444891    5939 notify.go:220] Checking for updates...
	I0924 12:21:14.452630    5939 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:21:14.455797    5939 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:21:14.459825    5939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:21:14.461179    5939 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:21:14.464811    5939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:21:14.468035    5939 config.go:182] Loaded profile config "no-preload-118000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:21:14.468295    5939 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:21:14.472673    5939 out.go:177] * Using the qemu2 driver based on existing profile
	I0924 12:21:14.479782    5939 start.go:297] selected driver: qemu2
	I0924 12:21:14.479788    5939 start.go:901] validating driver "qemu2" against &{Name:no-preload-118000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:no-preload-118000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:21:14.479838    5939 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:21:14.482082    5939 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:21:14.482109    5939 cni.go:84] Creating CNI manager for ""
	I0924 12:21:14.482130    5939 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:21:14.482160    5939 start.go:340] cluster config:
	{Name:no-preload-118000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-118000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:21:14.485775    5939 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:14.493825    5939 out.go:177] * Starting "no-preload-118000" primary control-plane node in "no-preload-118000" cluster
	I0924 12:21:14.497842    5939 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:21:14.497906    5939 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/no-preload-118000/config.json ...
	I0924 12:21:14.497959    5939 cache.go:107] acquiring lock: {Name:mk945321c85c08e9c9840e1e707ca00e831c4213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:14.497978    5939 cache.go:107] acquiring lock: {Name:mke529f06243198e861510cad929f74c32c063f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:14.498029    5939 cache.go:115] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0924 12:21:14.498026    5939 cache.go:107] acquiring lock: {Name:mk3ec785578d4fcd0a6d518e403f3186cf35936c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:14.498035    5939 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 79.041µs
	I0924 12:21:14.498042    5939 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0924 12:21:14.498044    5939 cache.go:115] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0924 12:21:14.498049    5939 cache.go:107] acquiring lock: {Name:mkfbe44a3b1fd8d0ec8387e199d76873bd515423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:14.498061    5939 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 82.833µs
	I0924 12:21:14.498067    5939 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0924 12:21:14.497960    5939 cache.go:107] acquiring lock: {Name:mk75f8363eccadc1ed90ec22051a1278540bd35b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:14.498081    5939 cache.go:107] acquiring lock: {Name:mk874d0bf029e9ec92c2e1ca0c670a222877370d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:14.498091    5939 cache.go:115] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0924 12:21:14.498096    5939 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 47.458µs
	I0924 12:21:14.498100    5939 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0924 12:21:14.498111    5939 cache.go:115] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0924 12:21:14.498116    5939 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 174.792µs
	I0924 12:21:14.498123    5939 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0924 12:21:14.498132    5939 cache.go:107] acquiring lock: {Name:mkf4dcd25ca4e41e57f8eb97bf9bc406715de281 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:14.498095    5939 cache.go:107] acquiring lock: {Name:mk648dd645474219d92ab04ad96100e5f2d6f645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:14.498204    5939 cache.go:115] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0924 12:21:14.498209    5939 cache.go:115] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0924 12:21:14.498212    5939 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 223.5µs
	I0924 12:21:14.498216    5939 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0924 12:21:14.498216    5939 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 147.417µs
	I0924 12:21:14.498222    5939 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0924 12:21:14.498219    5939 cache.go:115] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0924 12:21:14.498228    5939 cache.go:115] /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0924 12:21:14.498229    5939 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 116µs
	I0924 12:21:14.498233    5939 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0924 12:21:14.498234    5939 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 161.708µs
	I0924 12:21:14.498238    5939 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0924 12:21:14.498247    5939 cache.go:87] Successfully saved all images to host disk.
	I0924 12:21:14.498355    5939 start.go:360] acquireMachinesLock for no-preload-118000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:14.498393    5939 start.go:364] duration metric: took 32.167µs to acquireMachinesLock for "no-preload-118000"
	I0924 12:21:14.498404    5939 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:21:14.498408    5939 fix.go:54] fixHost starting: 
	I0924 12:21:14.498526    5939 fix.go:112] recreateIfNeeded on no-preload-118000: state=Stopped err=<nil>
	W0924 12:21:14.498534    5939 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:21:14.506889    5939 out.go:177] * Restarting existing qemu2 VM for "no-preload-118000" ...
	I0924 12:21:14.510785    5939 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:14.510818    5939 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:1c:67:81:f0:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/disk.qcow2
	I0924 12:21:14.512806    5939 main.go:141] libmachine: STDOUT: 
	I0924 12:21:14.512831    5939 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:14.512860    5939 fix.go:56] duration metric: took 14.44975ms for fixHost
	I0924 12:21:14.512864    5939 start.go:83] releasing machines lock for "no-preload-118000", held for 14.466458ms
	W0924 12:21:14.512869    5939 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:21:14.512909    5939 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:14.512913    5939 start.go:729] Will try again in 5 seconds ...
	I0924 12:21:19.514966    5939 start.go:360] acquireMachinesLock for no-preload-118000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:21.516266    5939 start.go:364] duration metric: took 2.001206667s to acquireMachinesLock for "no-preload-118000"
	I0924 12:21:21.516426    5939 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:21:21.516442    5939 fix.go:54] fixHost starting: 
	I0924 12:21:21.517220    5939 fix.go:112] recreateIfNeeded on no-preload-118000: state=Stopped err=<nil>
	W0924 12:21:21.517295    5939 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:21:21.535920    5939 out.go:177] * Restarting existing qemu2 VM for "no-preload-118000" ...
	I0924 12:21:21.542921    5939 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:21.543132    5939 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:1c:67:81:f0:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/no-preload-118000/disk.qcow2
	I0924 12:21:21.552560    5939 main.go:141] libmachine: STDOUT: 
	I0924 12:21:21.552629    5939 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:21.552718    5939 fix.go:56] duration metric: took 36.274583ms for fixHost
	I0924 12:21:21.552742    5939 start.go:83] releasing machines lock for "no-preload-118000", held for 36.434292ms
	W0924 12:21:21.552948    5939 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-118000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-118000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:21.569096    5939 out.go:201] 
	W0924 12:21:21.574313    5939 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:21:21.574345    5939 out.go:270] * 
	* 
	W0924 12:21:21.576170    5939 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:21:21.586940    5939 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-118000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000: exit status 7 (48.513875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-118000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (7.26s)
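
Unlike FirstStart, SecondStart reuses the existing machine, so the failure path is fixHost rather than createHost, and the 7.2s duration is mostly the retry visible above: fail, "Will try again in 5 seconds", fail again, exit 80. A simplified model of that two-attempt flow, assuming only what the log shows (the real control flow lives in minikube's start.go):

	// sketch: one retry after a fixed 5s delay, then the GUEST_PROVISION exit path.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startHost() error {
		// Stand-in for the qemu2 driver start; on this agent it always fails like this.
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := startHost()
		if err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			err = startHost()
		}
		if err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err) // exit status 80 in the report
		}
	}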

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-768000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-768000 create -f testdata/busybox.yaml: exit status 1 (32.313166ms)

** stderr ** 
	error: context "embed-certs-768000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-768000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000: exit status 7 (30.86725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-768000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000: exit status 7 (34.823875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-768000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-118000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000: exit status 7 (33.827916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-118000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-118000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-118000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-118000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.395792ms)

** stderr ** 
	error: context "no-preload-118000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-118000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000: exit status 7 (31.675375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-118000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-768000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-768000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-768000 describe deploy/metrics-server -n kube-system: exit status 1 (28.812167ms)

** stderr ** 
	error: context "embed-certs-768000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-768000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000: exit status 7 (32.5545ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-768000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-118000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000: exit status 7 (33.050709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-118000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)
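
The "(-want +got)" block above is go-cmp style diff output: every line prefixed with "- " is an expected image that is absent from "image list", and got contributes nothing because the VM never booted. A minimal reproduction of that rendering, assuming the comparison uses github.com/google/go-cmp (the "-want +got" header and "- " element prefixes match its output format):

	// sketch: regenerate the "(-want +got)" rendering with go-cmp.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/coredns/coredns:v1.11.3",
			"registry.k8s.io/etcd:3.5.15-0",
			// ... the rest of the v1.31.1 list from the failure above
		}
		got := []string{} // empty: `image list` had no running VM to query
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
		}
	}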

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-118000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-118000 --alsologtostderr -v=1: exit status 83 (41.617458ms)

-- stdout --
	* The control-plane node no-preload-118000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-118000"

-- /stdout --
** stderr ** 
	I0924 12:21:21.857665    5977 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:21:21.857805    5977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:21.857808    5977 out.go:358] Setting ErrFile to fd 2...
	I0924 12:21:21.857811    5977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:21.857965    5977 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:21:21.858198    5977 out.go:352] Setting JSON to false
	I0924 12:21:21.858209    5977 mustload.go:65] Loading cluster: no-preload-118000
	I0924 12:21:21.858424    5977 config.go:182] Loaded profile config "no-preload-118000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:21:21.861929    5977 out.go:177] * The control-plane node no-preload-118000 host is not running: state=Stopped
	I0924 12:21:21.864809    5977 out.go:177]   To start a cluster, run: "minikube start -p no-preload-118000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-118000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000: exit status 7 (29.487584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-118000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000: exit status 7 (28.478792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-118000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
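
Note the exit code here: pause fails with 83 rather than the 80 seen on the start paths, because mustload finds the profile but the host state is Stopped, so minikube prints advice and exits before touching the driver. A sketch of how a harness might distinguish the exit codes observed in this report; the meanings below are inferred from the surrounding log lines, not taken from minikube documentation:

	// sketch: classify the minikube exit codes seen in this report.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "no-preload-118000")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			switch exitErr.ExitCode() {
			case 80:
				fmt.Println("guest provisioning failed (the start paths above)")
			case 83:
				fmt.Println("profile exists but host is not running (this pause)")
			case 7:
				fmt.Println("status reported the host as Stopped")
			default:
				fmt.Println("unexpected exit code:", exitErr.ExitCode())
			}
		}
	}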

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-916000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-916000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.891517958s)

-- stdout --
	* [default-k8s-diff-port-916000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-916000" primary control-plane node in "default-k8s-diff-port-916000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-916000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:21:22.287025    6008 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:21:22.287165    6008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:22.287168    6008 out.go:358] Setting ErrFile to fd 2...
	I0924 12:21:22.287170    6008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:22.287294    6008 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:21:22.288389    6008 out.go:352] Setting JSON to false
	I0924 12:21:22.304570    6008 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4853,"bootTime":1727200829,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:21:22.304679    6008 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:21:22.309918    6008 out.go:177] * [default-k8s-diff-port-916000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:21:22.317967    6008 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:21:22.317999    6008 notify.go:220] Checking for updates...
	I0924 12:21:22.323871    6008 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:21:22.326855    6008 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:21:22.329857    6008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:21:22.332865    6008 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:21:22.335915    6008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:21:22.339338    6008 config.go:182] Loaded profile config "embed-certs-768000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:21:22.339400    6008 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:21:22.339449    6008 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:21:22.343991    6008 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:21:22.350875    6008 start.go:297] selected driver: qemu2
	I0924 12:21:22.350882    6008 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:21:22.350889    6008 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:21:22.353232    6008 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 12:21:22.356819    6008 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:21:22.360024    6008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:21:22.360051    6008 cni.go:84] Creating CNI manager for ""
	I0924 12:21:22.360082    6008 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:21:22.360086    6008 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 12:21:22.360126    6008 start.go:340] cluster config:
	{Name:default-k8s-diff-port-916000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:21:22.363849    6008 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:22.370900    6008 out.go:177] * Starting "default-k8s-diff-port-916000" primary control-plane node in "default-k8s-diff-port-916000" cluster
	I0924 12:21:22.374870    6008 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:21:22.374883    6008 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:21:22.374890    6008 cache.go:56] Caching tarball of preloaded images
	I0924 12:21:22.374940    6008 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:21:22.374945    6008 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:21:22.375004    6008 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/default-k8s-diff-port-916000/config.json ...
	I0924 12:21:22.375015    6008 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/default-k8s-diff-port-916000/config.json: {Name:mk1275b35fde11e2ba4f8edc69eeb9a2efd04013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:21:22.375219    6008 start.go:360] acquireMachinesLock for default-k8s-diff-port-916000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:22.375256    6008 start.go:364] duration metric: took 27.042µs to acquireMachinesLock for "default-k8s-diff-port-916000"
	I0924 12:21:22.375270    6008 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:21:22.375302    6008 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:21:22.382911    6008 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:21:22.399985    6008 start.go:159] libmachine.API.Create for "default-k8s-diff-port-916000" (driver="qemu2")
	I0924 12:21:22.400013    6008 client.go:168] LocalClient.Create starting
	I0924 12:21:22.400082    6008 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:21:22.400115    6008 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:22.400124    6008 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:22.400174    6008 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:21:22.400196    6008 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:22.400202    6008 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:22.400562    6008 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:21:22.562929    6008 main.go:141] libmachine: Creating SSH key...
	I0924 12:21:22.683785    6008 main.go:141] libmachine: Creating Disk image...
	I0924 12:21:22.683794    6008 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:21:22.684006    6008 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/disk.qcow2
	I0924 12:21:22.693376    6008 main.go:141] libmachine: STDOUT: 
	I0924 12:21:22.693393    6008 main.go:141] libmachine: STDERR: 
	I0924 12:21:22.693454    6008 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/disk.qcow2 +20000M
	I0924 12:21:22.701236    6008 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:21:22.701249    6008 main.go:141] libmachine: STDERR: 
	I0924 12:21:22.701262    6008 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/disk.qcow2
	I0924 12:21:22.701266    6008 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:21:22.701279    6008 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:22.701304    6008 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:ea:e0:09:4c:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/disk.qcow2
	I0924 12:21:22.702900    6008 main.go:141] libmachine: STDOUT: 
	I0924 12:21:22.702914    6008 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:22.702935    6008 client.go:171] duration metric: took 302.917167ms to LocalClient.Create
	I0924 12:21:24.705116    6008 start.go:128] duration metric: took 2.329802875s to createHost
	I0924 12:21:24.705176    6008 start.go:83] releasing machines lock for "default-k8s-diff-port-916000", held for 2.3299265s
	W0924 12:21:24.705258    6008 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:24.719702    6008 out.go:177] * Deleting "default-k8s-diff-port-916000" in qemu2 ...
	W0924 12:21:24.758301    6008 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:24.758338    6008 start.go:729] Will try again in 5 seconds ...
	I0924 12:21:29.760544    6008 start.go:360] acquireMachinesLock for default-k8s-diff-port-916000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:29.761073    6008 start.go:364] duration metric: took 395.916µs to acquireMachinesLock for "default-k8s-diff-port-916000"
	I0924 12:21:29.761200    6008 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:21:29.761518    6008 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:21:29.770074    6008 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:21:29.819833    6008 start.go:159] libmachine.API.Create for "default-k8s-diff-port-916000" (driver="qemu2")
	I0924 12:21:29.819898    6008 client.go:168] LocalClient.Create starting
	I0924 12:21:29.820025    6008 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:21:29.820095    6008 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:29.820113    6008 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:29.820172    6008 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:21:29.820227    6008 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:29.820238    6008 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:29.820851    6008 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:21:29.996627    6008 main.go:141] libmachine: Creating SSH key...
	I0924 12:21:30.072351    6008 main.go:141] libmachine: Creating Disk image...
	I0924 12:21:30.072356    6008 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:21:30.072547    6008 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/disk.qcow2
	I0924 12:21:30.081714    6008 main.go:141] libmachine: STDOUT: 
	I0924 12:21:30.081737    6008 main.go:141] libmachine: STDERR: 
	I0924 12:21:30.081806    6008 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/disk.qcow2 +20000M
	I0924 12:21:30.089772    6008 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:21:30.089787    6008 main.go:141] libmachine: STDERR: 
	I0924 12:21:30.089802    6008 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/disk.qcow2
	I0924 12:21:30.089807    6008 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:21:30.089818    6008 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:30.089848    6008 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:e3:55:8d:5f:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/disk.qcow2
	I0924 12:21:30.091526    6008 main.go:141] libmachine: STDOUT: 
	I0924 12:21:30.091567    6008 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:30.091594    6008 client.go:171] duration metric: took 271.693375ms to LocalClient.Create
	I0924 12:21:32.093754    6008 start.go:128] duration metric: took 2.332226s to createHost
	I0924 12:21:32.093811    6008 start.go:83] releasing machines lock for "default-k8s-diff-port-916000", held for 2.332725667s
	W0924 12:21:32.094189    6008 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-916000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-916000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:32.113766    6008 out.go:201] 
	W0924 12:21:32.120752    6008 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:21:32.120816    6008 out.go:270] * 
	* 
	W0924 12:21:32.123567    6008 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:21:32.136784    6008 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-916000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000: exit status 7 (66.888458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)
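Every qemu2 start in this run dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched. Before re-running the suite it is worth probing that socket directly; the sketch below (a hypothetical helper, not part of the test harness) performs the same unix-domain dial that socket_vmnet_client makes before handing the connection to qemu-system-aarch64 as fd 3.

	// socketprobe.go - minimal sketch: checks whether the socket_vmnet
	// daemon is listening on the SocketVMnetPath from the cluster config.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path used throughout this run
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the failure mode above:
			// the daemon is not running (or the socket file is stale).
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}

If the probe fails, restarting the socket_vmnet daemon on the CI host (e.g. via its launchd job) would be expected to clear every GUEST_PROVISION failure of this shape.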

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (6.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-768000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-768000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (6.807175583s)

                                                
                                                
-- stdout --
	* [embed-certs-768000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-768000" primary control-plane node in "embed-certs-768000" cluster
	* Restarting existing qemu2 VM for "embed-certs-768000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-768000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 12:21:25.395934    6036 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:21:25.396073    6036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:25.396077    6036 out.go:358] Setting ErrFile to fd 2...
	I0924 12:21:25.396079    6036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:25.396222    6036 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:21:25.397231    6036 out.go:352] Setting JSON to false
	I0924 12:21:25.413346    6036 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4856,"bootTime":1727200829,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:21:25.413423    6036 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:21:25.417672    6036 out.go:177] * [embed-certs-768000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:21:25.425615    6036 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:21:25.425657    6036 notify.go:220] Checking for updates...
	I0924 12:21:25.433573    6036 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:21:25.436640    6036 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:21:25.439581    6036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:21:25.442612    6036 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:21:25.445603    6036 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:21:25.448818    6036 config.go:182] Loaded profile config "embed-certs-768000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:21:25.449072    6036 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:21:25.453538    6036 out.go:177] * Using the qemu2 driver based on existing profile
	I0924 12:21:25.459519    6036 start.go:297] selected driver: qemu2
	I0924 12:21:25.459527    6036 start.go:901] validating driver "qemu2" against &{Name:embed-certs-768000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:embed-certs-768000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:21:25.459590    6036 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:21:25.461929    6036 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:21:25.461956    6036 cni.go:84] Creating CNI manager for ""
	I0924 12:21:25.461976    6036 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:21:25.462002    6036 start.go:340] cluster config:
	{Name:embed-certs-768000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-768000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:21:25.465608    6036 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:25.474605    6036 out.go:177] * Starting "embed-certs-768000" primary control-plane node in "embed-certs-768000" cluster
	I0924 12:21:25.478572    6036 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:21:25.478588    6036 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:21:25.478598    6036 cache.go:56] Caching tarball of preloaded images
	I0924 12:21:25.478665    6036 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:21:25.478671    6036 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:21:25.478748    6036 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/embed-certs-768000/config.json ...
	I0924 12:21:25.479228    6036 start.go:360] acquireMachinesLock for embed-certs-768000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:25.479262    6036 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "embed-certs-768000"
	I0924 12:21:25.479272    6036 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:21:25.479277    6036 fix.go:54] fixHost starting: 
	I0924 12:21:25.479402    6036 fix.go:112] recreateIfNeeded on embed-certs-768000: state=Stopped err=<nil>
	W0924 12:21:25.479413    6036 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:21:25.483626    6036 out.go:177] * Restarting existing qemu2 VM for "embed-certs-768000" ...
	I0924 12:21:25.491562    6036 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:25.491596    6036 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:f4:2e:07:50:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/disk.qcow2
	I0924 12:21:25.493642    6036 main.go:141] libmachine: STDOUT: 
	I0924 12:21:25.493664    6036 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:25.493696    6036 fix.go:56] duration metric: took 14.417666ms for fixHost
	I0924 12:21:25.493702    6036 start.go:83] releasing machines lock for "embed-certs-768000", held for 14.435709ms
	W0924 12:21:25.493709    6036 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:21:25.493747    6036 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:25.493752    6036 start.go:729] Will try again in 5 seconds ...
	I0924 12:21:30.495634    6036 start.go:360] acquireMachinesLock for embed-certs-768000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:32.094027    6036 start.go:364] duration metric: took 1.598310167s to acquireMachinesLock for "embed-certs-768000"
	I0924 12:21:32.094186    6036 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:21:32.094204    6036 fix.go:54] fixHost starting: 
	I0924 12:21:32.094909    6036 fix.go:112] recreateIfNeeded on embed-certs-768000: state=Stopped err=<nil>
	W0924 12:21:32.094936    6036 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:21:32.117796    6036 out.go:177] * Restarting existing qemu2 VM for "embed-certs-768000" ...
	I0924 12:21:32.124709    6036 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:32.124917    6036 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:f4:2e:07:50:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/embed-certs-768000/disk.qcow2
	I0924 12:21:32.134458    6036 main.go:141] libmachine: STDOUT: 
	I0924 12:21:32.134535    6036 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:32.134637    6036 fix.go:56] duration metric: took 40.427542ms for fixHost
	I0924 12:21:32.134665    6036 start.go:83] releasing machines lock for "embed-certs-768000", held for 40.597583ms
	W0924 12:21:32.134881    6036 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-768000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-768000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:32.144707    6036 out.go:201] 
	W0924 12:21:32.150835    6036 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:21:32.150858    6036 out.go:270] * 
	* 
	W0924 12:21:32.152383    6036 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:21:32.164842    6036 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-768000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000: exit status 7 (52.370792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-768000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.86s)
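Both SecondStart failures follow the same two-attempt pattern as the fresh creates: one driver start, a fixed 5-second pause, one retry, then the fatal GUEST_PROVISION exit. A compact illustration of that control flow (an illustrative sketch only, not the actual minikube start.go code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry mirrors the two-attempt pattern in the log above:
	// the first failure is logged as a warning, then a 5-second pause
	// and one retry; only the second failure is fatal.
	func startWithRetry(start func() error) error {
		err := start()
		if err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		return start()
	}

	func main() {
		attempt := func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		}
		if err := startWithRetry(attempt); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}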

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-916000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-916000 create -f testdata/busybox.yaml: exit status 1 (32.306708ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-916000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-916000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000: exit status 7 (31.685375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-916000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000: exit status 7 (35.136917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
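This failure, like the other "context does not exist" errors in the neighboring addon and deploy tests, is downstream of the failed starts: provisioning never completed, so minikube never wrote a context for the profile into the kubeconfig, and every kubectl --context call exits 1. The sketch below (a hypothetical helper, not part of the harness) shows how to confirm that by listing the contexts the active kubeconfig actually holds.

	// contextcheck.go - hypothetical helper: lists the contexts in the
	// active kubeconfig, one name per line; a profile whose start failed
	// before provisioning (as above) never appears here.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "config", "get-contexts", "-o", "name")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "kubectl failed:", err)
			os.Exit(1)
		}
	}

Run with KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig, this would be expected to show no "default-k8s-diff-port-916000" entry, matching the errors above.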

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-768000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000: exit status 7 (33.9855ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-768000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-768000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-768000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-768000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.468042ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-768000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-768000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000: exit status 7 (31.291542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-768000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-916000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-916000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-916000 describe deploy/metrics-server -n kube-system: exit status 1 (28.632042ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-916000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-916000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000: exit status 7 (32.429791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-768000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000: exit status 7 (33.169917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-768000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-768000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-768000 --alsologtostderr -v=1: exit status 83 (45.751333ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-768000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-768000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 12:21:32.439308    6072 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:21:32.439471    6072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:32.439475    6072 out.go:358] Setting ErrFile to fd 2...
	I0924 12:21:32.439478    6072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:32.439616    6072 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:21:32.439843    6072 out.go:352] Setting JSON to false
	I0924 12:21:32.439852    6072 mustload.go:65] Loading cluster: embed-certs-768000
	I0924 12:21:32.440078    6072 config.go:182] Loaded profile config "embed-certs-768000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:21:32.444706    6072 out.go:177] * The control-plane node embed-certs-768000 host is not running: state=Stopped
	I0924 12:21:32.448574    6072 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-768000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-768000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000: exit status 7 (28.970834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-768000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000: exit status 7 (29.1385ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-768000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-773000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-773000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.8521915s)

                                                
                                                
-- stdout --
	* [newest-cni-773000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-773000" primary control-plane node in "newest-cni-773000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-773000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 12:21:32.754752    6094 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:21:32.754866    6094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:32.754870    6094 out.go:358] Setting ErrFile to fd 2...
	I0924 12:21:32.754873    6094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:32.755006    6094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:21:32.756165    6094 out.go:352] Setting JSON to false
	I0924 12:21:32.772442    6094 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4863,"bootTime":1727200829,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:21:32.772530    6094 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:21:32.775703    6094 out.go:177] * [newest-cni-773000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:21:32.782700    6094 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:21:32.782731    6094 notify.go:220] Checking for updates...
	I0924 12:21:32.788637    6094 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:21:32.791685    6094 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:21:32.794714    6094 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:21:32.799626    6094 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:21:32.802735    6094 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:21:32.806123    6094 config.go:182] Loaded profile config "default-k8s-diff-port-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:21:32.806186    6094 config.go:182] Loaded profile config "multinode-504000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:21:32.806236    6094 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:21:32.810681    6094 out.go:177] * Using the qemu2 driver based on user configuration
	I0924 12:21:32.818687    6094 start.go:297] selected driver: qemu2
	I0924 12:21:32.818697    6094 start.go:901] validating driver "qemu2" against <nil>
	I0924 12:21:32.818705    6094 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:21:32.821102    6094 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0924 12:21:32.821143    6094 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0924 12:21:32.829791    6094 out.go:177] * Automatically selected the socket_vmnet network
	I0924 12:21:32.832818    6094 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0924 12:21:32.832851    6094 cni.go:84] Creating CNI manager for ""
	I0924 12:21:32.832881    6094 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:21:32.832886    6094 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 12:21:32.832922    6094 start.go:340] cluster config:
	{Name:newest-cni-773000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:21:32.836570    6094 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:32.843720    6094 out.go:177] * Starting "newest-cni-773000" primary control-plane node in "newest-cni-773000" cluster
	I0924 12:21:32.847740    6094 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:21:32.847758    6094 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:21:32.847772    6094 cache.go:56] Caching tarball of preloaded images
	I0924 12:21:32.847875    6094 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:21:32.847881    6094 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:21:32.847950    6094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/newest-cni-773000/config.json ...
	I0924 12:21:32.847962    6094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/newest-cni-773000/config.json: {Name:mk12b0348b441162e6951f16cd4542e3c6999bdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 12:21:32.848223    6094 start.go:360] acquireMachinesLock for newest-cni-773000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:32.848260    6094 start.go:364] duration metric: took 30.083µs to acquireMachinesLock for "newest-cni-773000"
	I0924 12:21:32.848274    6094 start.go:93] Provisioning new machine with config: &{Name:newest-cni-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:21:32.848306    6094 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:21:32.854718    6094 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:21:32.873135    6094 start.go:159] libmachine.API.Create for "newest-cni-773000" (driver="qemu2")
	I0924 12:21:32.873164    6094 client.go:168] LocalClient.Create starting
	I0924 12:21:32.873229    6094 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:21:32.873261    6094 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:32.873270    6094 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:32.873318    6094 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:21:32.873342    6094 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:32.873351    6094 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:32.873692    6094 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:21:33.042039    6094 main.go:141] libmachine: Creating SSH key...
	I0924 12:21:33.103299    6094 main.go:141] libmachine: Creating Disk image...
	I0924 12:21:33.103305    6094 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:21:33.103495    6094 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/disk.qcow2
	I0924 12:21:33.112502    6094 main.go:141] libmachine: STDOUT: 
	I0924 12:21:33.112524    6094 main.go:141] libmachine: STDERR: 
	I0924 12:21:33.112584    6094 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/disk.qcow2 +20000M
	I0924 12:21:33.120355    6094 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:21:33.120382    6094 main.go:141] libmachine: STDERR: 
	I0924 12:21:33.120399    6094 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/disk.qcow2
	I0924 12:21:33.120405    6094 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:21:33.120416    6094 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:33.120441    6094 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:84:52:e7:47:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/disk.qcow2
	I0924 12:21:33.122055    6094 main.go:141] libmachine: STDOUT: 
	I0924 12:21:33.122068    6094 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:33.122091    6094 client.go:171] duration metric: took 248.918792ms to LocalClient.Create
	I0924 12:21:35.124283    6094 start.go:128] duration metric: took 2.275962542s to createHost
	I0924 12:21:35.124365    6094 start.go:83] releasing machines lock for "newest-cni-773000", held for 2.276109209s
	W0924 12:21:35.124419    6094 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:35.141908    6094 out.go:177] * Deleting "newest-cni-773000" in qemu2 ...
	W0924 12:21:35.179539    6094 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:35.179586    6094 start.go:729] Will try again in 5 seconds ...
	I0924 12:21:40.181820    6094 start.go:360] acquireMachinesLock for newest-cni-773000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:40.196148    6094 start.go:364] duration metric: took 14.239875ms to acquireMachinesLock for "newest-cni-773000"
	I0924 12:21:40.196290    6094 start.go:93] Provisioning new machine with config: &{Name:newest-cni-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 12:21:40.196492    6094 start.go:125] createHost starting for "" (driver="qemu2")
	I0924 12:21:40.204962    6094 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0924 12:21:40.254397    6094 start.go:159] libmachine.API.Create for "newest-cni-773000" (driver="qemu2")
	I0924 12:21:40.254439    6094 client.go:168] LocalClient.Create starting
	I0924 12:21:40.254570    6094 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/ca.pem
	I0924 12:21:40.254641    6094 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:40.254661    6094 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:40.254766    6094 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19700-1081/.minikube/certs/cert.pem
	I0924 12:21:40.254811    6094 main.go:141] libmachine: Decoding PEM data...
	I0924 12:21:40.254827    6094 main.go:141] libmachine: Parsing certificate...
	I0924 12:21:40.255347    6094 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0924 12:21:40.431947    6094 main.go:141] libmachine: Creating SSH key...
	I0924 12:21:40.522592    6094 main.go:141] libmachine: Creating Disk image...
	I0924 12:21:40.522604    6094 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0924 12:21:40.522827    6094 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/disk.qcow2
	I0924 12:21:40.532490    6094 main.go:141] libmachine: STDOUT: 
	I0924 12:21:40.532525    6094 main.go:141] libmachine: STDERR: 
	I0924 12:21:40.532597    6094 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/disk.qcow2 +20000M
	I0924 12:21:40.541407    6094 main.go:141] libmachine: STDOUT: Image resized.
	
	I0924 12:21:40.541430    6094 main.go:141] libmachine: STDERR: 
	I0924 12:21:40.541442    6094 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/disk.qcow2
	I0924 12:21:40.541447    6094 main.go:141] libmachine: Starting QEMU VM...
	I0924 12:21:40.541457    6094 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:40.541480    6094 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:51:48:8a:88:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/disk.qcow2
	I0924 12:21:40.543320    6094 main.go:141] libmachine: STDOUT: 
	I0924 12:21:40.543337    6094 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:40.543348    6094 client.go:171] duration metric: took 288.905834ms to LocalClient.Create
	I0924 12:21:42.545526    6094 start.go:128] duration metric: took 2.349011667s to createHost
	I0924 12:21:42.545611    6094 start.go:83] releasing machines lock for "newest-cni-773000", held for 2.349441375s
	W0924 12:21:42.545935    6094 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:42.554675    6094 out.go:201] 
	W0924 12:21:42.558753    6094 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:21:42.558876    6094 out.go:270] * 
	* 
	W0924 12:21:42.561670    6094 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:21:42.569587    6094 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-773000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-773000 -n newest-cni-773000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-773000 -n newest-cni-773000: exit status 7 (68.715958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-773000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.92s)
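
Note on the failure mode: every failed start in this group dies at the same step. /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the Unix socket "/var/run/socket_vmnet" (Connection refused), so qemu-system-aarch64 is never launched; the harness deletes the profile, retries once, then exits with GUEST_PROVISION. A minimal triage sketch for the build host, assuming a Homebrew-managed socket_vmnet service (the service name and restart command are assumptions; the socket and client paths are taken from the command lines above):

	# Does the socket exist, and is the daemon registered with launchd?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet

	# If the daemon is down, restarting the (assumed) Homebrew service should
	# clear the "Connection refused" before re-running the suite:
	sudo brew services restart socket_vmnet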

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-916000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-916000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.554625s)

-- stdout --
	* [default-k8s-diff-port-916000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-916000" primary control-plane node in "default-k8s-diff-port-916000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-916000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-916000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:21:34.708591    6118 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:21:34.708712    6118 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:34.708716    6118 out.go:358] Setting ErrFile to fd 2...
	I0924 12:21:34.708718    6118 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:34.708847    6118 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:21:34.709869    6118 out.go:352] Setting JSON to false
	I0924 12:21:34.725830    6118 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4865,"bootTime":1727200829,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:21:34.725903    6118 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:21:34.730752    6118 out.go:177] * [default-k8s-diff-port-916000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:21:34.738718    6118 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:21:34.738781    6118 notify.go:220] Checking for updates...
	I0924 12:21:34.745785    6118 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:21:34.748644    6118 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:21:34.751709    6118 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:21:34.754747    6118 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:21:34.757711    6118 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:21:34.761039    6118 config.go:182] Loaded profile config "default-k8s-diff-port-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:21:34.761306    6118 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:21:34.765745    6118 out.go:177] * Using the qemu2 driver based on existing profile
	I0924 12:21:34.774737    6118 start.go:297] selected driver: qemu2
	I0924 12:21:34.774746    6118 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:21:34.774816    6118 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:21:34.777181    6118 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 12:21:34.777211    6118 cni.go:84] Creating CNI manager for ""
	I0924 12:21:34.777235    6118 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:21:34.777266    6118 start.go:340] cluster config:
	{Name:default-k8s-diff-port-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:21:34.780901    6118 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:34.787663    6118 out.go:177] * Starting "default-k8s-diff-port-916000" primary control-plane node in "default-k8s-diff-port-916000" cluster
	I0924 12:21:34.791712    6118 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:21:34.791726    6118 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:21:34.791735    6118 cache.go:56] Caching tarball of preloaded images
	I0924 12:21:34.791786    6118 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:21:34.791791    6118 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:21:34.791852    6118 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/default-k8s-diff-port-916000/config.json ...
	I0924 12:21:34.792330    6118 start.go:360] acquireMachinesLock for default-k8s-diff-port-916000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:35.124528    6118 start.go:364] duration metric: took 332.119334ms to acquireMachinesLock for "default-k8s-diff-port-916000"
	I0924 12:21:35.124603    6118 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:21:35.124637    6118 fix.go:54] fixHost starting: 
	I0924 12:21:35.125428    6118 fix.go:112] recreateIfNeeded on default-k8s-diff-port-916000: state=Stopped err=<nil>
	W0924 12:21:35.125473    6118 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:21:35.133948    6118 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-916000" ...
	I0924 12:21:35.145964    6118 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:35.146135    6118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:e3:55:8d:5f:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/disk.qcow2
	I0924 12:21:35.158410    6118 main.go:141] libmachine: STDOUT: 
	I0924 12:21:35.158480    6118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:35.158613    6118 fix.go:56] duration metric: took 33.981583ms for fixHost
	I0924 12:21:35.158632    6118 start.go:83] releasing machines lock for "default-k8s-diff-port-916000", held for 34.070209ms
	W0924 12:21:35.158669    6118 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:21:35.158851    6118 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:35.158866    6118 start.go:729] Will try again in 5 seconds ...
	I0924 12:21:40.161125    6118 start.go:360] acquireMachinesLock for default-k8s-diff-port-916000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:40.161637    6118 start.go:364] duration metric: took 370.375µs to acquireMachinesLock for "default-k8s-diff-port-916000"
	I0924 12:21:40.161814    6118 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:21:40.161836    6118 fix.go:54] fixHost starting: 
	I0924 12:21:40.162658    6118 fix.go:112] recreateIfNeeded on default-k8s-diff-port-916000: state=Stopped err=<nil>
	W0924 12:21:40.162684    6118 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:21:40.182007    6118 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-916000" ...
	I0924 12:21:40.186051    6118 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:40.186317    6118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:e3:55:8d:5f:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/default-k8s-diff-port-916000/disk.qcow2
	I0924 12:21:40.195850    6118 main.go:141] libmachine: STDOUT: 
	I0924 12:21:40.195917    6118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:40.196027    6118 fix.go:56] duration metric: took 34.1915ms for fixHost
	I0924 12:21:40.196056    6118 start.go:83] releasing machines lock for "default-k8s-diff-port-916000", held for 34.370709ms
	W0924 12:21:40.196352    6118 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-916000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-916000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:40.208051    6118 out.go:201] 
	W0924 12:21:40.212150    6118 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:21:40.212181    6118 out.go:270] * 
	* 
	W0924 12:21:40.214755    6118 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:21:40.224982    6118 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-916000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000: exit status 7 (50.275917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.61s)
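
Note: the four subtests that follow (UserAppExistsAfterStop, AddonExistsAfterStop, VerifyKubernetesImages, Pause) are cascade failures from this start. The VM never came back, so the kubeconfig context "default-k8s-diff-port-916000" was never recreated and "image list" has nothing to report. A quick sanity check, sketched with the same binaries the tests invoke (illustrative only, not part of the harness):

	# On a healthy profile both commands succeed; here the context is missing
	# and the host state prints "Stopped".
	kubectl config get-contexts default-k8s-diff-port-916000
	out/minikube-darwin-arm64 status -p default-k8s-diff-port-916000 --format='{{.Host}}'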

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-916000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000: exit status 7 (35.22625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-916000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-916000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-916000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.136333ms)

** stderr ** 
	error: context "default-k8s-diff-port-916000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-916000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000: exit status 7 (34.450084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-916000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000: exit status 7 (30.252708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-916000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-916000 --alsologtostderr -v=1: exit status 83 (44.49875ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-916000"

-- /stdout --
** stderr ** 
	I0924 12:21:40.499035    6142 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:21:40.499196    6142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:40.499200    6142 out.go:358] Setting ErrFile to fd 2...
	I0924 12:21:40.499203    6142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:40.499343    6142 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:21:40.499588    6142 out.go:352] Setting JSON to false
	I0924 12:21:40.499597    6142 mustload.go:65] Loading cluster: default-k8s-diff-port-916000
	I0924 12:21:40.499829    6142 config.go:182] Loaded profile config "default-k8s-diff-port-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:21:40.504060    6142 out.go:177] * The control-plane node default-k8s-diff-port-916000 host is not running: state=Stopped
	I0924 12:21:40.508030    6142 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-916000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-916000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000: exit status 7 (30.389625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-916000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000: exit status 7 (29.33575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-773000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-773000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.187359333s)

-- stdout --
	* [newest-cni-773000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-773000" primary control-plane node in "newest-cni-773000" cluster
	* Restarting existing qemu2 VM for "newest-cni-773000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-773000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0924 12:21:46.113026    6190 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:21:46.113157    6190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:46.113160    6190 out.go:358] Setting ErrFile to fd 2...
	I0924 12:21:46.113163    6190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:46.113309    6190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:21:46.114267    6190 out.go:352] Setting JSON to false
	I0924 12:21:46.130406    6190 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4877,"bootTime":1727200829,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 12:21:46.130470    6190 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 12:21:46.135358    6190 out.go:177] * [newest-cni-773000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 12:21:46.142301    6190 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 12:21:46.142363    6190 notify.go:220] Checking for updates...
	I0924 12:21:46.149309    6190 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 12:21:46.152321    6190 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 12:21:46.155314    6190 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 12:21:46.158258    6190 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 12:21:46.161282    6190 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 12:21:46.164545    6190 config.go:182] Loaded profile config "newest-cni-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:21:46.164825    6190 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 12:21:46.169197    6190 out.go:177] * Using the qemu2 driver based on existing profile
	I0924 12:21:46.176279    6190 start.go:297] selected driver: qemu2
	I0924 12:21:46.176289    6190 start.go:901] validating driver "qemu2" against &{Name:newest-cni-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:21:46.176346    6190 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 12:21:46.178805    6190 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0924 12:21:46.178832    6190 cni.go:84] Creating CNI manager for ""
	I0924 12:21:46.178853    6190 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 12:21:46.178881    6190 start.go:340] cluster config:
	{Name:newest-cni-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 12:21:46.182634    6190 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 12:21:46.191297    6190 out.go:177] * Starting "newest-cni-773000" primary control-plane node in "newest-cni-773000" cluster
	I0924 12:21:46.195260    6190 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 12:21:46.195275    6190 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 12:21:46.195283    6190 cache.go:56] Caching tarball of preloaded images
	I0924 12:21:46.195367    6190 preload.go:172] Found /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 12:21:46.195373    6190 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0924 12:21:46.195439    6190 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/newest-cni-773000/config.json ...
	I0924 12:21:46.195875    6190 start.go:360] acquireMachinesLock for newest-cni-773000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:46.195913    6190 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "newest-cni-773000"
	I0924 12:21:46.195923    6190 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:21:46.195928    6190 fix.go:54] fixHost starting: 
	I0924 12:21:46.196040    6190 fix.go:112] recreateIfNeeded on newest-cni-773000: state=Stopped err=<nil>
	W0924 12:21:46.196050    6190 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:21:46.200304    6190 out.go:177] * Restarting existing qemu2 VM for "newest-cni-773000" ...
	I0924 12:21:46.208294    6190 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:46.208325    6190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:51:48:8a:88:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/disk.qcow2
	I0924 12:21:46.210264    6190 main.go:141] libmachine: STDOUT: 
	I0924 12:21:46.210291    6190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:46.210321    6190 fix.go:56] duration metric: took 14.391834ms for fixHost
	I0924 12:21:46.210326    6190 start.go:83] releasing machines lock for "newest-cni-773000", held for 14.409125ms
	W0924 12:21:46.210335    6190 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:21:46.210381    6190 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:46.210386    6190 start.go:729] Will try again in 5 seconds ...
	I0924 12:21:51.212604    6190 start.go:360] acquireMachinesLock for newest-cni-773000: {Name:mkc2052d46bbe0db4289f22c8a6866566e41ad15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 12:21:51.213028    6190 start.go:364] duration metric: took 297.958µs to acquireMachinesLock for "newest-cni-773000"
	I0924 12:21:51.213150    6190 start.go:96] Skipping create...Using existing machine configuration
	I0924 12:21:51.213172    6190 fix.go:54] fixHost starting: 
	I0924 12:21:51.213915    6190 fix.go:112] recreateIfNeeded on newest-cni-773000: state=Stopped err=<nil>
	W0924 12:21:51.213942    6190 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 12:21:51.223265    6190 out.go:177] * Restarting existing qemu2 VM for "newest-cni-773000" ...
	I0924 12:21:51.226208    6190 qemu.go:418] Using hvf for hardware acceleration
	I0924 12:21:51.226449    6190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:51:48:8a:88:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19700-1081/.minikube/machines/newest-cni-773000/disk.qcow2
	I0924 12:21:51.235660    6190 main.go:141] libmachine: STDOUT: 
	I0924 12:21:51.235739    6190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0924 12:21:51.235837    6190 fix.go:56] duration metric: took 22.663666ms for fixHost
	I0924 12:21:51.235860    6190 start.go:83] releasing machines lock for "newest-cni-773000", held for 22.809958ms
	W0924 12:21:51.236074    6190 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-773000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-773000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0924 12:21:51.243141    6190 out.go:201] 
	W0924 12:21:51.247099    6190 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0924 12:21:51.247145    6190 out.go:270] * 
	* 
	W0924 12:21:51.249969    6190 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 12:21:51.258133    6190 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-773000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-773000 -n newest-cni-773000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-773000 -n newest-cni-773000: exit status 7 (69.395875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-773000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
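
The repeated ERROR above ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused") is a host-side failure, not a guest one: the qemu2 driver dials the socket_vmnet daemon's UNIX socket to obtain vmnet networking, and nothing was listening on it. A minimal check-and-recover sketch for the runner, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe (the service name, and the fact that Homebrew's socket location may differ from the /var/run path this job dials, are assumptions rather than facts from this log):

	# confirm the socket the driver dials actually exists on the host
	ls -l /var/run/socket_vmnet
	# restart the daemon; vmnet requires root privileges, hence sudo
	sudo brew services restart socket_vmnet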

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-773000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-773000 -n newest-cni-773000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-773000 -n newest-cni-773000: exit status 7 (30.331709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-773000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
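
In the cmp-style diff above, "(-want +got)" means each "-" line is an expected v1.31.1 image that "image list" did not return; since the VM never started, the got side is empty and all eight images are reported missing. On a healthy profile the got side can be inspected directly; a sketch assuming jq is installed and that the JSON entries expose a repoTags field (an assumption about the output schema, not shown in this log):

	out/minikube-darwin-arm64 -p newest-cni-773000 image list --format=json \
		| jq -r '.[].repoTags[]' | sort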

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-773000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-773000 --alsologtostderr -v=1: exit status 83 (41.268084ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-773000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-773000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 12:21:51.443396    6208 out.go:345] Setting OutFile to fd 1 ...
	I0924 12:21:51.443595    6208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:51.443598    6208 out.go:358] Setting ErrFile to fd 2...
	I0924 12:21:51.443600    6208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 12:21:51.443733    6208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 12:21:51.443960    6208 out.go:352] Setting JSON to false
	I0924 12:21:51.443969    6208 mustload.go:65] Loading cluster: newest-cni-773000
	I0924 12:21:51.444208    6208 config.go:182] Loaded profile config "newest-cni-773000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 12:21:51.448592    6208 out.go:177] * The control-plane node newest-cni-773000 host is not running: state=Stopped
	I0924 12:21:51.452441    6208 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-773000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-773000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-773000 -n newest-cni-773000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-773000 -n newest-cni-773000: exit status 7 (30.899334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-773000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-773000 -n newest-cni-773000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-773000 -n newest-cni-773000: exit status 7 (30.324ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-773000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (154/273)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 8.34
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 197.83
29 TestAddons/serial/Volcano 38.78
31 TestAddons/serial/GCPAuth/Namespaces 0.07
34 TestAddons/parallel/Ingress 17.56
35 TestAddons/parallel/InspektorGadget 10.32
36 TestAddons/parallel/MetricsServer 6.28
38 TestAddons/parallel/CSI 51.57
39 TestAddons/parallel/Headlamp 18.66
40 TestAddons/parallel/CloudSpanner 5.2
41 TestAddons/parallel/LocalPath 40.95
42 TestAddons/parallel/NvidiaDevicePlugin 6.18
43 TestAddons/parallel/Yakd 10.27
44 TestAddons/StoppedEnableDisable 12.4
52 TestHyperKitDriverInstallOrUpdate 11.67
55 TestErrorSpam/setup 33.96
56 TestErrorSpam/start 0.35
57 TestErrorSpam/status 0.26
58 TestErrorSpam/pause 0.69
59 TestErrorSpam/unpause 0.64
60 TestErrorSpam/stop 64.3
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 51.9
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 36.08
67 TestFunctional/serial/KubeContext 0.03
68 TestFunctional/serial/KubectlGetPods 0.04
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.93
72 TestFunctional/serial/CacheCmd/cache/add_local 1.17
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.03
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
76 TestFunctional/serial/CacheCmd/cache/cache_reload 0.68
77 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/serial/MinikubeKubectlCmd 2.02
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.01
80 TestFunctional/serial/ExtraConfig 37.76
81 TestFunctional/serial/ComponentHealth 0.04
82 TestFunctional/serial/LogsCmd 0.64
83 TestFunctional/serial/LogsFileCmd 0.64
84 TestFunctional/serial/InvalidService 3.89
86 TestFunctional/parallel/ConfigCmd 0.22
87 TestFunctional/parallel/DashboardCmd 8.08
88 TestFunctional/parallel/DryRun 0.22
89 TestFunctional/parallel/InternationalLanguage 0.11
90 TestFunctional/parallel/StatusCmd 0.25
95 TestFunctional/parallel/AddonsCmd 0.1
96 TestFunctional/parallel/PersistentVolumeClaim 25.08
98 TestFunctional/parallel/SSHCmd 0.13
99 TestFunctional/parallel/CpCmd 0.42
101 TestFunctional/parallel/FileSync 0.07
102 TestFunctional/parallel/CertSync 0.41
106 TestFunctional/parallel/NodeLabels 0.05
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
110 TestFunctional/parallel/License 0.24
111 TestFunctional/parallel/Version/short 0.05
112 TestFunctional/parallel/Version/components 0.16
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
117 TestFunctional/parallel/ImageCommands/ImageBuild 1.85
118 TestFunctional/parallel/ImageCommands/Setup 1.84
119 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
120 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
121 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.23
122 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.13
123 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
124 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
125 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.19
126 TestFunctional/parallel/DockerEnv/bash 0.27
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.11
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
139 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
141 TestFunctional/parallel/MountCmd/any-port 5.48
142 TestFunctional/parallel/MountCmd/specific-port 1.18
143 TestFunctional/parallel/MountCmd/VerifyCleanup 2.07
144 TestFunctional/parallel/ServiceCmd/DeployApp 9.1
145 TestFunctional/parallel/ServiceCmd/List 0.31
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
148 TestFunctional/parallel/ServiceCmd/Format 0.1
149 TestFunctional/parallel/ServiceCmd/URL 0.1
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.15
151 TestFunctional/parallel/ProfileCmd/profile_list 0.13
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.14
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 181.32
160 TestMultiControlPlane/serial/DeployApp 5.42
161 TestMultiControlPlane/serial/PingHostFromPods 0.76
162 TestMultiControlPlane/serial/AddWorkerNode 53.7
163 TestMultiControlPlane/serial/NodeLabels 0.14
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.31
165 TestMultiControlPlane/serial/CopyFile 4.1
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 75.05
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 2.98
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
211 TestMainNoArgs 0.03
258 TestStoppedBinaryUpgrade/Setup 1.52
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
275 TestNoKubernetes/serial/ProfileList 31.31
276 TestNoKubernetes/serial/Stop 3.13
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
288 TestStoppedBinaryUpgrade/MinikubeLogs 0.77
293 TestStartStop/group/old-k8s-version/serial/Stop 3.44
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
306 TestStartStop/group/no-preload/serial/Stop 2.06
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
315 TestStartStop/group/embed-certs/serial/Stop 3.39
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.13
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.05
337 TestStartStop/group/newest-cni/serial/Stop 3.25
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0924 11:19:03.768027    1598 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0924 11:19:03.768502    1598 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
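
"preload-exists" asserts nothing beyond the presence of the cached tarball; the shell equivalent of what preload.go:146 reports above is simply:

	ls -lh /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4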

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-823000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-823000: exit status 85 (94.233917ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-823000 | jenkins | v1.34.0 | 24 Sep 24 11:18 PDT |          |
	|         | -p download-only-823000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 11:18:44
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 11:18:44.101896    1599 out.go:345] Setting OutFile to fd 1 ...
	I0924 11:18:44.102047    1599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:18:44.102051    1599 out.go:358] Setting ErrFile to fd 2...
	I0924 11:18:44.102053    1599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:18:44.102182    1599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	W0924 11:18:44.102266    1599 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19700-1081/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19700-1081/.minikube/config/config.json: no such file or directory
	I0924 11:18:44.103511    1599 out.go:352] Setting JSON to true
	I0924 11:18:44.121049    1599 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1095,"bootTime":1727200829,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 11:18:44.121110    1599 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 11:18:44.127342    1599 out.go:97] [download-only-823000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 11:18:44.127502    1599 notify.go:220] Checking for updates...
	W0924 11:18:44.127572    1599 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball: no such file or directory
	I0924 11:18:44.131312    1599 out.go:169] MINIKUBE_LOCATION=19700
	I0924 11:18:44.134479    1599 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 11:18:44.138367    1599 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 11:18:44.141420    1599 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 11:18:44.144289    1599 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	W0924 11:18:44.150297    1599 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0924 11:18:44.150537    1599 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 11:18:44.155252    1599 out.go:97] Using the qemu2 driver based on user configuration
	I0924 11:18:44.155270    1599 start.go:297] selected driver: qemu2
	I0924 11:18:44.155283    1599 start.go:901] validating driver "qemu2" against <nil>
	I0924 11:18:44.155354    1599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 11:18:44.158281    1599 out.go:169] Automatically selected the socket_vmnet network
	I0924 11:18:44.163803    1599 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0924 11:18:44.163928    1599 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 11:18:44.163981    1599 cni.go:84] Creating CNI manager for ""
	I0924 11:18:44.164019    1599 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0924 11:18:44.164070    1599 start.go:340] cluster config:
	{Name:download-only-823000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-823000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 11:18:44.169433    1599 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 11:18:44.172358    1599 out.go:97] Downloading VM boot image ...
	I0924 11:18:44.172380    1599 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I0924 11:18:52.176743    1599 out.go:97] Starting "download-only-823000" primary control-plane node in "download-only-823000" cluster
	I0924 11:18:52.176762    1599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0924 11:18:52.231505    1599 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0924 11:18:52.231511    1599 cache.go:56] Caching tarball of preloaded images
	I0924 11:18:52.231707    1599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0924 11:18:52.235096    1599 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0924 11:18:52.235103    1599 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0924 11:18:52.326372    1599 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0924 11:19:02.441698    1599 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0924 11:19:02.441878    1599 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0924 11:19:03.138846    1599 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0924 11:19:03.139055    1599 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/download-only-823000/config.json ...
	I0924 11:19:03.139075    1599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/download-only-823000/config.json: {Name:mkea315355728d670f6c8314367e1e532a813e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 11:19:03.139354    1599 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0924 11:19:03.139561    1599 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0924 11:19:03.722691    1599 out.go:193] 
	W0924 11:19:03.728968    1599 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19700-1081/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1068996c0 0x1068996c0 0x1068996c0 0x1068996c0 0x1068996c0 0x1068996c0 0x1068996c0] Decompressors:map[bz2:0x14000803730 gz:0x14000803738 tar:0x140008036e0 tar.bz2:0x140008036f0 tar.gz:0x14000803700 tar.xz:0x14000803710 tar.zst:0x14000803720 tbz2:0x140008036f0 tgz:0x14000803700 txz:0x14000803710 tzst:0x14000803720 xz:0x14000803740 zip:0x14000803750 zst:0x14000803748] Getters:map[file:0x1400078cac0 http:0x1400071e2d0 https:0x1400071e320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0924 11:19:03.728997    1599 out_reason.go:110] 
	W0924 11:19:03.736666    1599 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 11:19:03.739788    1599 out.go:193] 
	
	
	* The control-plane node download-only-823000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-823000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
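
The "bad response code: 404" above is the real failure behind the v1.20.0 download tests: dl.k8s.io publishes no darwin/arm64 kubectl (nor the .sha256 checksum file, which the getter fetches first) for v1.20.0, since Apple-silicon release binaries only appeared in later Kubernetes releases. The gap is easy to confirm from any host; the v1.31.1 URL below is inferred by analogy with the v1.20.0 one from the log, not taken from this block:

	# follow redirects and print only the final HTTP status
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl   # expect 404
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl   # expect 200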

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-823000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (8.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-295000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-295000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (8.342444916s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (8.34s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0924 11:19:12.459961    1598 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0924 11:19:12.460017    1598 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-295000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-295000: exit status 85 (75.683917ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-823000 | jenkins | v1.34.0 | 24 Sep 24 11:18 PDT |                     |
	|         | -p download-only-823000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 24 Sep 24 11:19 PDT | 24 Sep 24 11:19 PDT |
	| delete  | -p download-only-823000        | download-only-823000 | jenkins | v1.34.0 | 24 Sep 24 11:19 PDT | 24 Sep 24 11:19 PDT |
	| start   | -o=json --download-only        | download-only-295000 | jenkins | v1.34.0 | 24 Sep 24 11:19 PDT |                     |
	|         | -p download-only-295000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 11:19:04
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 11:19:04.144720    1628 out.go:345] Setting OutFile to fd 1 ...
	I0924 11:19:04.144853    1628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:19:04.144859    1628 out.go:358] Setting ErrFile to fd 2...
	I0924 11:19:04.144862    1628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:19:04.144992    1628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 11:19:04.146051    1628 out.go:352] Setting JSON to true
	I0924 11:19:04.161972    1628 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1115,"bootTime":1727200829,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 11:19:04.162037    1628 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 11:19:04.165541    1628 out.go:97] [download-only-295000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 11:19:04.165640    1628 notify.go:220] Checking for updates...
	I0924 11:19:04.168394    1628 out.go:169] MINIKUBE_LOCATION=19700
	I0924 11:19:04.171433    1628 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 11:19:04.175406    1628 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 11:19:04.178339    1628 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 11:19:04.181445    1628 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	W0924 11:19:04.187380    1628 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0924 11:19:04.187534    1628 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 11:19:04.190449    1628 out.go:97] Using the qemu2 driver based on user configuration
	I0924 11:19:04.190458    1628 start.go:297] selected driver: qemu2
	I0924 11:19:04.190462    1628 start.go:901] validating driver "qemu2" against <nil>
	I0924 11:19:04.190509    1628 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 11:19:04.193397    1628 out.go:169] Automatically selected the socket_vmnet network
	I0924 11:19:04.196802    1628 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0924 11:19:04.196909    1628 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 11:19:04.196928    1628 cni.go:84] Creating CNI manager for ""
	I0924 11:19:04.196960    1628 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 11:19:04.196967    1628 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 11:19:04.197017    1628 start.go:340] cluster config:
	{Name:download-only-295000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 11:19:04.200397    1628 iso.go:125] acquiring lock: {Name:mk8b445de8a14c5616f84f7d3451cc6b14140f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 11:19:04.203440    1628 out.go:97] Starting "download-only-295000" primary control-plane node in "download-only-295000" cluster
	I0924 11:19:04.203449    1628 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 11:19:04.256315    1628 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0924 11:19:04.256334    1628 cache.go:56] Caching tarball of preloaded images
	I0924 11:19:04.256498    1628 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 11:19:04.262128    1628 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0924 11:19:04.262135    1628 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0924 11:19:04.346027    1628 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-295000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-295000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)
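
The preload download above carries its integrity check in the URL query (checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d), which preload.go verifies after saving. The same check can be reproduced by hand on the cached file (md5 -q is the macOS spelling; use md5sum on Linux):

	md5 -q /Users/jenkins/minikube-integration/19700-1081/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	# expected output: 402f69b5e09ccb1e1dbe401b4cdd104d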

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-295000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-472000
addons_test.go:975: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-472000: exit status 85 (59.206625ms)

-- stdout --
	* Profile "addons-472000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-472000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-472000
addons_test.go:986: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-472000: exit status 85 (55.342917ms)

-- stdout --
	* Profile "addons-472000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-472000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (197.83s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-472000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-472000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m17.827196334s)
--- PASS: TestAddons/Setup (197.83s)

TestAddons/serial/Volcano (38.78s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 7.25925ms
addons_test.go:843: volcano-admission stabilized in 7.44325ms
addons_test.go:835: volcano-scheduler stabilized in 7.59725ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-zl4dj" [4adae003-fa52-4313-ba26-b4d121424dc5] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005175458s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-v8gwt" [02273680-beb8-4b5b-928d-20e35c0462ce] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004596167s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-cs76b" [0b0d3b05-6b8d-46d9-abfd-fb62df383a4e] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005347708s
addons_test.go:870: (dbg) Run:  kubectl --context addons-472000 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-472000 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-472000 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [8f75e463-4109-49b9-bf76-2236eabe2dad] Pending
helpers_test.go:344: "test-job-nginx-0" [8f75e463-4109-49b9-bf76-2236eabe2dad] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [8f75e463-4109-49b9-bf76-2236eabe2dad] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.008420083s
addons_test.go:906: (dbg) Run:  out/minikube-darwin-arm64 -p addons-472000 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-darwin-arm64 -p addons-472000 addons disable volcano --alsologtostderr -v=1: (10.5369965s)
--- PASS: TestAddons/serial/Volcano (38.78s)
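Note: the vcjob.yaml applied above is a Volcano Job custom resource; the testdata file itself is not reproduced in this log. A minimal sketch of an equivalent manifest, assuming the upstream batch.volcano.sh/v1alpha1 API and inferring the job name, task name, and namespace from the pod "test-job-nginx-0" observed above (the my-volcano namespace is assumed to already exist):

kubectl --context addons-472000 create -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  minAvailable: 1          # gang-scheduling threshold: run only when 1 pod can be placed
  schedulerName: volcano   # route the pod through the Volcano scheduler
  tasks:
    - replicas: 1
      name: nginx
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: nginx
              image: nginx:latest
EOF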

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-472000 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-472000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/parallel/Ingress (17.56s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-472000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-472000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-472000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e3f64318-5af4-471d-9200-9c7218eb08fc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e3f64318-5af4-471d-9200-9c7218eb08fc] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.005856166s
I0924 11:32:53.222526    1598 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p addons-472000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-472000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-darwin-arm64 -p addons-472000 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p addons-472000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 -p addons-472000 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-darwin-arm64 -p addons-472000 addons disable ingress --alsologtostderr -v=1: (7.227585292s)
--- PASS: TestAddons/parallel/Ingress (17.56s)
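Note: the curl probe above exercises host-based routing, so the test only needs an Ingress rule for nginx.example.com backed by the nginx service it created (the kapi log confirms a Service named nginx in default). The actual testdata/nginx-ingress-v1.yaml is not shown in this log; a minimal sketch of such a rule:

kubectl --context addons-472000 apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - host: nginx.example.com   # matched against the Host header sent by the curl probe
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
EOF
out/minikube-darwin-arm64 -p addons-472000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"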

                                                
                                    
TestAddons/parallel/InspektorGadget (10.32s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-f4td4" [26314592-c8df-425c-ac98-3ec30d5edc99] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010244042s
addons_test.go:789: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-472000
addons_test.go:789: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-472000: (5.30516875s)
--- PASS: TestAddons/parallel/InspektorGadget (10.32s)

TestAddons/parallel/MetricsServer (6.28s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 1.211625ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-x8kns" [6d3ad84c-14ea-4b11-9020-775ab4a507de] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.009880209s
addons_test.go:413: (dbg) Run:  kubectl --context addons-472000 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p addons-472000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.28s)

TestAddons/parallel/CSI (51.57s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0924 11:32:27.824374    1598 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0924 11:32:27.827101    1598 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0924 11:32:27.827113    1598 kapi.go:107] duration metric: took 2.778417ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 2.783708ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-472000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-472000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [37654d2e-1f56-46b9-8e8a-9af4db5a2a41] Pending
helpers_test.go:344: "task-pv-pod" [37654d2e-1f56-46b9-8e8a-9af4db5a2a41] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [37654d2e-1f56-46b9-8e8a-9af4db5a2a41] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.009737916s
addons_test.go:528: (dbg) Run:  kubectl --context addons-472000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-472000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-472000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-472000 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-472000 delete pod task-pv-pod: (1.006449917s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-472000 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-472000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-472000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [aff33a62-6186-444e-bd91-2cc61479bbe0] Pending
helpers_test.go:344: "task-pv-pod-restore" [aff33a62-6186-444e-bd91-2cc61479bbe0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [aff33a62-6186-444e-bd91-2cc61479bbe0] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.006915334s
addons_test.go:570: (dbg) Run:  kubectl --context addons-472000 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-472000 delete pod task-pv-pod-restore: (1.022219584s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-472000 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-472000 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-darwin-arm64 -p addons-472000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-darwin-arm64 -p addons-472000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.135475291s)
addons_test.go:586: (dbg) Run:  out/minikube-darwin-arm64 -p addons-472000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.57s)
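Note: the sequence above is the standard CSI snapshot round trip: bind a PVC, write from a pod, snapshot the claim, then restore it into a new PVC by naming the snapshot as dataSource. The testdata files are not reproduced here; a sketch of the snapshot-and-restore half, with the storage class and snapshot class names assumed (whatever names the csi-hostpath-driver addon installs apply in practice):

kubectl --context addons-472000 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc                 # the claim created earlier in the test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed class name
  dataSource:                                       # restore: new volume pre-populated from the snapshot
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
EOF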

                                                
                                    
TestAddons/parallel/Headlamp (18.66s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-472000 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-b8szs" [45a4eb5f-0537-4297-be70-4f0b6f5acd23] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-b8szs" [45a4eb5f-0537-4297-be70-4f0b6f5acd23] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.008419541s
addons_test.go:777: (dbg) Run:  out/minikube-darwin-arm64 -p addons-472000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-darwin-arm64 -p addons-472000 addons disable headlamp --alsologtostderr -v=1: (5.312100833s)
--- PASS: TestAddons/parallel/Headlamp (18.66s)

TestAddons/parallel/CloudSpanner (5.2s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-9flrf" [1b1d8122-9bef-4824-9078-8fd38b06487d] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004211541s
addons_test.go:808: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-472000
--- PASS: TestAddons/parallel/CloudSpanner (5.20s)

TestAddons/parallel/LocalPath (40.95s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-472000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-472000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-472000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c3fde90d-0d26-47ff-bfa2-24d014378c8b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c3fde90d-0d26-47ff-bfa2-24d014378c8b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c3fde90d-0d26-47ff-bfa2-24d014378c8b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00482075s
addons_test.go:938: (dbg) Run:  kubectl --context addons-472000 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-darwin-arm64 -p addons-472000 ssh "cat /opt/local-path-provisioner/pvc-5d9acef8-c72c-4cb0-b678-9b4ebcfd0da9_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-472000 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-472000 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-darwin-arm64 -p addons-472000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-darwin-arm64 -p addons-472000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.445139167s)
--- PASS: TestAddons/parallel/LocalPath (40.95s)
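Note: the storage-provisioner-rancher addon serves claims through the local-path storage class and provisions a host directory under /opt/local-path-provisioner, which is why the test can cat file1 straight off the node. The class binds lazily (WaitForFirstConsumer-style), so the claim stays Pending until the pod lands, matching the repeated polling above. A minimal sketch of such a claim (size arbitrary, name taken from the log):

kubectl --context addons-472000 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path   # provisions a hostPath directory on demand
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 64Mi
EOF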

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.18s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8mc94" [4abf47e9-f66b-4179-821a-ab27378421bd] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005710708s
addons_test.go:1002: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-472000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.18s)

TestAddons/parallel/Yakd (10.27s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-zndhv" [8a389894-de7e-4d85-be2e-9575f38fcec4] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.008230791s
addons_test.go:1014: (dbg) Run:  out/minikube-darwin-arm64 -p addons-472000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-darwin-arm64 -p addons-472000 addons disable yakd --alsologtostderr -v=1: (5.257061167s)
--- PASS: TestAddons/parallel/Yakd (10.27s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-472000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-472000: (12.208912125s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-472000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-472000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-472000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (11.67s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I0924 12:07:07.654669    1598 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0924 12:07:07.654875    1598 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W0924 12:07:10.172214    1598 install.go:62] docker-machine-driver-hyperkit: exit status 1
W0924 12:07:10.172461    1598 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0924 12:07:10.172513    1598 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/001/docker-machine-driver-hyperkit
I0924 12:07:10.692618    1598 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x104772d40 0x104772d40 0x104772d40 0x104772d40 0x104772d40 0x104772d40 0x104772d40] Decompressors:map[bz2:0x14000715bb0 gz:0x14000715bb8 tar:0x14000715b60 tar.bz2:0x14000715b70 tar.gz:0x14000715b80 tar.xz:0x14000715b90 tar.zst:0x14000715ba0 tbz2:0x14000715b70 tgz:0x14000715b80 txz:0x14000715b90 tzst:0x14000715ba0 xz:0x14000715bc0 zip:0x14000715bd0 zst:0x14000715bc8] Getters:map[file:0x140018d8030 http:0x1400052aa50 https:0x1400052aaf0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0924 12:07:10.692665    1598 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1636673057/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (11.67s)

TestErrorSpam/setup (33.96s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-209000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-209000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 --driver=qemu2 : (33.957835417s)
--- PASS: TestErrorSpam/setup (33.96s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.26s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 status
--- PASS: TestErrorSpam/status (0.26s)

TestErrorSpam/pause (0.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 pause
--- PASS: TestErrorSpam/pause (0.69s)

TestErrorSpam/unpause (0.64s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 unpause
--- PASS: TestErrorSpam/unpause (0.64s)

TestErrorSpam/stop (64.3s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 stop: (12.209567917s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 stop: (26.059169042s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-209000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-209000 stop: (26.031451708s)
--- PASS: TestErrorSpam/stop (64.30s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19700-1081/.minikube/files/etc/test/nested/copy/1598/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.9s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-313000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-313000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (51.902845375s)
--- PASS: TestFunctional/serial/StartWithProxy (51.90s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.08s)

=== RUN   TestFunctional/serial/SoftStart
I0924 11:36:04.610781    1598 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-313000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-313000 --alsologtostderr -v=8: (36.075926292s)
functional_test.go:663: soft start took 36.076422792s for "functional-313000" cluster.
I0924 11:36:40.686817    1598 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (36.08s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-313000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-313000 cache add registry.k8s.io/pause:3.1: (1.267879875s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-313000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1994947612/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 cache add minikube-local-cache-test:functional-313000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 cache delete minikube-local-cache-test:functional-313000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-313000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-313000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (70.531292ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.68s)
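Note: the assertions above capture the intended cache-reload workflow: delete a cached image inside the node, confirm the runtime no longer sees it, then push everything in the local cache back. Condensed, using the same profile and commands as the log:

out/minikube-darwin-arm64 -p functional-313000 ssh sudo docker rmi registry.k8s.io/pause:latest
out/minikube-darwin-arm64 -p functional-313000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image is gone
out/minikube-darwin-arm64 -p functional-313000 cache reload                                            # re-load images from the host cache
out/minikube-darwin-arm64 -p functional-313000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again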

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (2.02s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 kubectl -- --context functional-313000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-313000 kubectl -- --context functional-313000 get pods: (2.018867125s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.02s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-313000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-313000 get pods: (1.00452025s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

TestFunctional/serial/ExtraConfig (37.76s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-313000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-313000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.761328833s)
functional_test.go:761: restart took 37.761429416s for "functional-313000" cluster.
I0924 11:37:26.543068    1598 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (37.76s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-313000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.64s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.64s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd175805465/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.64s)

TestFunctional/serial/InvalidService (3.89s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-313000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-313000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-313000: exit status 115 (153.528ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31931 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-313000 delete -f testdata/invalidsvc.yaml
E0924 11:37:31.163604    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:37:31.171037    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:37:31.184350    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:37:31.207211    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:37:31.250607    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:37:31.333945    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:37:31.497288    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/InvalidService (3.89s)
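Note: exit status 115 (SVC_UNREACHABLE) is the expected outcome here: the service exists and gets a NodePort, but no running pod backs it. Any Service whose selector matches nothing reproduces this; a minimal sketch in the spirit of testdata/invalidsvc.yaml (the actual file may differ; the selector label is hypothetical):

kubectl --context functional-313000 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod   # matches no pod, so the service has no endpoints
  ports:
    - port: 80
EOF
out/minikube-darwin-arm64 service invalid-svc -p functional-313000   # exits 115 with SVC_UNREACHABLE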

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-313000 config get cpus: exit status 14 (35.188084ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-313000 config get cpus: exit status 14 (30.057ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
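
Editor's note: the run above pins down the CLI contract: "config unset" succeeds even when the key is absent, while "config get" on an unset key fails with exit status 14. A minimal standalone sketch of the same check in Go (assumptions: a minikube binary on PATH and this run's profile name; the helper is illustrative, not part of the suite):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Per the log above, "config get" of an unset key exits with status 14.
	cmd := exec.Command("minikube", "-p", "functional-313000", "config", "get", "cpus")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Printf("key unset, as expected (exit 14): %s", out)
		return
	}
	fmt.Printf("unexpected result: err=%v, output: %s", err, out)
}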

TestFunctional/parallel/DashboardCmd (8.08s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-313000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-313000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2793: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.08s)
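
Editor's note: the teardown message above ("unable to kill pid 2793: os: process already finished") is a benign race: the dashboard daemon exited before the helper tried to kill it. In Go that condition is detectable as os.ErrProcessDone, the sentinel behind that exact message since Go 1.16; a small sketch, with a short-lived command standing in for the daemon:

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("true") // stand-in for the dashboard process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = cmd.Wait() // by now the process has exited, as in the race above
	if err := cmd.Process.Kill(); errors.Is(err, os.ErrProcessDone) {
		fmt.Println("process already finished; safe to ignore")
	}
}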

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-313000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
E0924 11:38:12.162284    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-313000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.734666ms)

-- stdout --
	* [functional-313000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0924 11:38:12.115488    2780 out.go:345] Setting OutFile to fd 1 ...
	I0924 11:38:12.115646    2780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:38:12.115650    2780 out.go:358] Setting ErrFile to fd 2...
	I0924 11:38:12.115652    2780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:38:12.115786    2780 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 11:38:12.116807    2780 out.go:352] Setting JSON to false
	I0924 11:38:12.133365    2780 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2263,"bootTime":1727200829,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 11:38:12.133433    2780 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 11:38:12.138124    2780 out.go:177] * [functional-313000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0924 11:38:12.142989    2780 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 11:38:12.143019    2780 notify.go:220] Checking for updates...
	I0924 11:38:12.148341    2780 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 11:38:12.151081    2780 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 11:38:12.154033    2780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 11:38:12.157081    2780 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 11:38:12.160001    2780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 11:38:12.163293    2780 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 11:38:12.163559    2780 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 11:38:12.168014    2780 out.go:177] * Using the qemu2 driver based on existing profile
	I0924 11:38:12.175000    2780 start.go:297] selected driver: qemu2
	I0924 11:38:12.175007    2780 start.go:901] validating driver "qemu2" against &{Name:functional-313000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-313000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 11:38:12.175058    2780 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 11:38:12.179992    2780 out.go:201] 
	W0924 11:38:12.184004    2780 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0924 11:38:12.188037    2780 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-313000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
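
Editor's note: --dry-run still runs minikube's resource validation, so 250MB, below the stated 1800MB usable minimum, makes the command exit 23 with reason RSRC_INSUFFICIENT_REQ_MEMORY without touching the VM. A sketch asserting on that behavior (assumes a minikube binary on PATH; the exit code and reason string come from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-313000",
		"--dry-run", "--memory", "250MB", "--driver=qemu2")
	out, err := cmd.CombinedOutput()
	// The validation failure surfaces as exit status 23 plus a
	// machine-readable reason code in the output.
	exitErr, ok := err.(*exec.ExitError)
	if ok && exitErr.ExitCode() == 23 &&
		strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("memory validation rejected 250MB, as expected")
		return
	}
	fmt.Printf("unexpected: err=%v\n%s", err, out)
}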

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-313000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-313000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.716ms)

-- stdout --
	* [functional-313000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0924 11:38:11.745665    2770 out.go:345] Setting OutFile to fd 1 ...
	I0924 11:38:11.745773    2770 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:38:11.745776    2770 out.go:358] Setting ErrFile to fd 2...
	I0924 11:38:11.745779    2770 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 11:38:11.745913    2770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
	I0924 11:38:11.747389    2770 out.go:352] Setting JSON to false
	I0924 11:38:11.765471    2770 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2262,"bootTime":1727200829,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0924 11:38:11.765599    2770 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0924 11:38:11.769750    2770 out.go:177] * [functional-313000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0924 11:38:11.777763    2770 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 11:38:11.777801    2770 notify.go:220] Checking for updates...
	I0924 11:38:11.785665    2770 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	I0924 11:38:11.788750    2770 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0924 11:38:11.791683    2770 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 11:38:11.794722    2770 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	I0924 11:38:11.797778    2770 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 11:38:11.800877    2770 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 11:38:11.801121    2770 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 11:38:11.805717    2770 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0924 11:38:11.811663    2770 start.go:297] selected driver: qemu2
	I0924 11:38:11.811669    2770 start.go:901] validating driver "qemu2" against &{Name:functional-313000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-313000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 11:38:11.811708    2770 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 11:38:11.817679    2770 out.go:201] 
	W0924 11:38:11.821695    2770 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0924 11:38:11.825671    2770 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
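
Editor's note: the French lines above say "Using the qemu2 driver based on existing profile" and "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation of 250MiB is less than the usable minimum of 1800MB", i.e. the same dry-run failure as in DryRun, localized. A sketch of provoking the localized output, on the assumption that minikube picks its message catalog from the standard locale environment variables:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-313000",
		"--dry-run", "--memory", "250MB", "--driver=qemu2")
	// Assumption: forcing a French locale switches minikube's messages,
	// reproducing the output captured above. Exit 23 is still expected.
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput()
	fmt.Printf("%s", out)
}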

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (25.08s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b1e00d2c-0136-425a-8156-a2568864723d] Running
E0924 11:37:32.460603    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:37:33.749643    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002804875s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-313000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-313000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-313000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-313000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5f1d6900-b0d1-4f89-a0ce-11fa001fe89c] Pending
helpers_test.go:344: "sp-pod" [5f1d6900-b0d1-4f89-a0ce-11fa001fe89c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0924 11:37:41.436579    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [5f1d6900-b0d1-4f89-a0ce-11fa001fe89c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.01070125s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-313000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-313000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-313000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5e5f590d-1ed9-4f32-bd0e-6c533067ef29] Pending
helpers_test.go:344: "sp-pod" [5e5f590d-1ed9-4f32-bd0e-6c533067ef29] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0924 11:37:51.680371    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [5e5f590d-1ed9-4f32-bd0e-6c533067ef29] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.008561458s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-313000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.08s)
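
Editor's note: the sequence above is a persistence round-trip: create the PVC, write a file into the PVC-backed mount from the first sp-pod, delete the pod, recreate it from the same manifest, and confirm the file is still visible. A condensed sketch of the same flow (assumes kubectl on PATH, this run's context, and the repo's testdata/storage-provisioner manifests; the readiness polling between steps is elided to comments):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against this run's context and fails loudly.
func run(args ...string) {
	full := append([]string{"--context", "functional-313000"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
	}
	fmt.Printf("%s", out)
}

func main() {
	run("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ...wait for sp-pod to reach Running, as the test polls above...
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ...wait again...
	run("exec", "sp-pod", "--", "ls", "/tmp/mount") // "foo" survives the pod restart
}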

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.42s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh -n functional-313000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 cp functional-313000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd105299523/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh -n functional-313000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh -n functional-313000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.42s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1598/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "sudo cat /etc/test/nested/copy/1598/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1598.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "sudo cat /etc/ssl/certs/1598.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1598.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "sudo cat /usr/share/ca-certificates/1598.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/15982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "sudo cat /etc/ssl/certs/15982.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/15982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "sudo cat /usr/share/ca-certificates/15982.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-313000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
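
Editor's note: the kubectl call above hands a Go template to --output=go-template to print the first node's label keys. The same template runs unchanged under the standard library; a self-contained demo with placeholder labels (the map below is illustrative, not this run's data):

package main

import (
	"os"
	"text/template"
)

func main() {
	const tmpl = `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	// Shape mirrors `kubectl get nodes -o json`: an items list whose entries
	// carry metadata.labels maps.
	data := map[string]any{
		"items": []map[string]any{
			{"metadata": map[string]any{"labels": map[string]string{
				"kubernetes.io/arch": "arm64",
				"kubernetes.io/os":   "linux",
			}}},
		},
	}
	t := template.Must(template.New("labels").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
	// Prints: kubernetes.io/arch kubernetes.io/os
}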

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-313000 ssh "sudo systemctl is-active crio": exit status 1 (65.6535ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

TestFunctional/parallel/License (0.24s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
E0924 11:37:31.818996    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/License (0.24s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.16s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-313000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-313000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-313000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-313000 image ls --format short --alsologtostderr:
I0924 11:38:20.627153    2799 out.go:345] Setting OutFile to fd 1 ...
I0924 11:38:20.627549    2799 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 11:38:20.627552    2799 out.go:358] Setting ErrFile to fd 2...
I0924 11:38:20.627555    2799 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 11:38:20.627715    2799 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
I0924 11:38:20.628152    2799 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 11:38:20.628212    2799 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 11:38:20.629014    2799 ssh_runner.go:195] Run: systemctl --version
I0924 11:38:20.629021    2799 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/functional-313000/id_rsa Username:docker}
I0924 11:38:20.658392    2799 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-313000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| localhost/my-image                          | functional-313000 | b9aadb560619c | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-313000 | cc8e4ffdb63d9 | 30B    |
| docker.io/kicbase/echo-server               | functional-313000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-313000 image ls --format table --alsologtostderr:
I0924 11:38:22.697840    2811 out.go:345] Setting OutFile to fd 1 ...
I0924 11:38:22.698001    2811 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 11:38:22.698004    2811 out.go:358] Setting ErrFile to fd 2...
I0924 11:38:22.698007    2811 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 11:38:22.698146    2811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
I0924 11:38:22.698568    2811 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 11:38:22.698634    2811 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 11:38:22.699435    2811 ssh_runner.go:195] Run: systemctl --version
I0924 11:38:22.699448    2811 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/functional-313000/id_rsa Username:docker}
I0924 11:38:22.727980    2811 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-313000 image ls --format json --alsologtostderr:
[{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"b9aadb560619c022f9d7e76f7546dddba198ee8ea6f6a9c92317858fcbd671c6","repoDigests":[],"repoTags":["localhost/my-image:functional-313000"],"size":"1410000"},{"id":"cc8e4ffdb63d9193a1d6507a1f6039653011ecf77ce5e7d94f0030e965514078","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-313000"],"size":"30"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-313000"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-313000 image ls --format json --alsologtostderr:
I0924 11:38:22.624340    2809 out.go:345] Setting OutFile to fd 1 ...
I0924 11:38:22.624489    2809 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 11:38:22.624492    2809 out.go:358] Setting ErrFile to fd 2...
I0924 11:38:22.624494    2809 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 11:38:22.624630    2809 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
I0924 11:38:22.625084    2809 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 11:38:22.625144    2809 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 11:38:22.625981    2809 ssh_runner.go:195] Run: systemctl --version
I0924 11:38:22.625988    2809 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/functional-313000/id_rsa Username:docker}
I0924 11:38:22.654713    2809 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
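
Editor's note: the JSON variant is the machine-friendly one: an array of objects with id, repoDigests, repoTags, and size (note that size is serialized as a string). A sketch of consuming it; the struct below mirrors the fields visible in this output, not a published schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-313000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%v  %s bytes\n", img.RepoTags, img.Size)
	}
}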

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-313000 image ls --format yaml --alsologtostderr:
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-313000
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: cc8e4ffdb63d9193a1d6507a1f6039653011ecf77ce5e7d94f0030e965514078
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-313000
size: "30"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-313000 image ls --format yaml --alsologtostderr:
I0924 11:38:20.702174    2801 out.go:345] Setting OutFile to fd 1 ...
I0924 11:38:20.702323    2801 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 11:38:20.702326    2801 out.go:358] Setting ErrFile to fd 2...
I0924 11:38:20.702328    2801 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 11:38:20.702466    2801 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
I0924 11:38:20.702957    2801 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 11:38:20.703024    2801 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 11:38:20.703853    2801 ssh_runner.go:195] Run: systemctl --version
I0924 11:38:20.703861    2801 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/functional-313000/id_rsa Username:docker}
I0924 11:38:20.732279    2801 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-313000 ssh pgrep buildkitd: exit status 1 (63.460458ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image build -t localhost/my-image:functional-313000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-313000 image build -t localhost/my-image:functional-313000 testdata/build --alsologtostderr: (1.710537959s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-313000 image build -t localhost/my-image:functional-313000 testdata/build --alsologtostderr:
I0924 11:38:20.840607    2805 out.go:345] Setting OutFile to fd 1 ...
I0924 11:38:20.840847    2805 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 11:38:20.840850    2805 out.go:358] Setting ErrFile to fd 2...
I0924 11:38:20.840853    2805 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 11:38:20.840983    2805 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19700-1081/.minikube/bin
I0924 11:38:20.841420    2805 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 11:38:20.842288    2805 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0924 11:38:20.843154    2805 ssh_runner.go:195] Run: systemctl --version
I0924 11:38:20.843167    2805 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19700-1081/.minikube/machines/functional-313000/id_rsa Username:docker}
I0924 11:38:20.870951    2805 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1166938359.tar
I0924 11:38:20.871002    2805 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0924 11:38:20.874513    2805 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1166938359.tar
I0924 11:38:20.876024    2805 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1166938359.tar: stat -c "%s %y" /var/lib/minikube/build/build.1166938359.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1166938359.tar': No such file or directory
I0924 11:38:20.876038    2805 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1166938359.tar --> /var/lib/minikube/build/build.1166938359.tar (3072 bytes)
I0924 11:38:20.884723    2805 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1166938359
I0924 11:38:20.887989    2805 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1166938359 -xf /var/lib/minikube/build/build.1166938359.tar
I0924 11:38:20.891337    2805 docker.go:360] Building image: /var/lib/minikube/build/build.1166938359
I0924 11:38:20.891400    2805 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-313000 /var/lib/minikube/build/build.1166938359
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:b9aadb560619c022f9d7e76f7546dddba198ee8ea6f6a9c92317858fcbd671c6 done
#8 naming to localhost/my-image:functional-313000 done
#8 DONE 0.0s
I0924 11:38:22.507063    2805 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-313000 /var/lib/minikube/build/build.1166938359: (1.6156575s)
I0924 11:38:22.507128    2805 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1166938359
I0924 11:38:22.510730    2805 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1166938359.tar
I0924 11:38:22.514068    2805 build_images.go:217] Built localhost/my-image:functional-313000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1166938359.tar
I0924 11:38:22.514085    2805 build_images.go:133] succeeded building to: functional-313000
I0924 11:38:22.514088    2805 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.85s)
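
Editor's note: the BuildKit trace above exposes the whole build context: a 97-byte Dockerfile with three steps (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) plus a small content.txt. A sketch that reconstructs an equivalent context and builds it in-cluster; the file contents are inferred from the trace, not copied verbatim from the repo's testdata/build:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// Steps reconstructed from the BuildKit trace above.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	// minikube tars the context, ships it to the node over SSH, and runs
	// docker build there (the ssh_runner/scp lines visible above).
	cmd := exec.Command("minikube", "-p", "functional-313000", "image", "build",
		"-t", "localhost/my-image:functional-313000", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}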

TestFunctional/parallel/ImageCommands/Setup (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.827224834s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-313000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.84s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image load --daemon kicbase/echo-server:functional-313000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image load --daemon kicbase/echo-server:functional-313000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-313000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image load --daemon kicbase/echo-server:functional-313000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.23s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image save kicbase/echo-server:functional-313000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image rm kicbase/echo-server:functional-313000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
E0924 11:37:36.313170    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-313000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 image save --daemon kicbase/echo-server:functional-313000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-313000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

TestFunctional/parallel/DockerEnv/bash (0.27s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-313000 docker-env) && out/minikube-darwin-arm64 status -p functional-313000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-313000 docker-env) && docker images"
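The eval in the two commands above works because `docker-env` prints shell export statements that point the host's docker client at the daemon inside the functional-313000 VM; a representative sketch of that output (variable values here are illustrative, not captured from this run):

    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.105.4:2376"
    export DOCKER_CERT_PATH="/Users/jenkins/minikube-integration/19700-1081/.minikube/certs"
    export MINIKUBE_ACTIVE_DOCKERD="functional-313000"

so the subsequent `docker images` call lists the VM's images rather than the host's.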
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-313000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-313000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-313000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-313000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2636: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-313000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-313000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7d26c8e5-e9c4-4d31-9dbe-541973c1b0b7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7d26c8e5-e9c4-4d31-9dbe-541973c1b0b7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.010339333s
I0924 11:37:52.286426    1598 kapi.go:150] Service nginx-svc in namespace default found.
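testdata/testsvc.yaml itself is not reproduced in this log; a minimal manifest consistent with what the test waits for (a pod labeled run=nginx-svc plus a LoadBalancer service named nginx-svc, whose ingress IP the tunnel must populate) would look roughly like:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-svc
      labels:
        run: nginx-svc
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
    spec:
      type: LoadBalancer
      selector:
        run: nginx-svc
      ports:
      - port: 80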
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.11s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-313000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.100.238 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I0924 11:37:52.380134    1598 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I0924 11:37:52.419020    1598 config.go:182] Loaded profile config "functional-313000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-313000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/MountCmd/any-port (5.48s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-313000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3785911569/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727203072545580000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3785911569/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727203072545580000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3785911569/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727203072545580000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3785911569/001/test-1727203072545580000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-313000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (69.786875ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0924 11:37:52.616251    1598 retry.go:31] will retry after 733.758253ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 24 18:37 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 24 18:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 24 18:37 test-1727203072545580000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh cat /mount-9p/test-1727203072545580000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-313000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [41785ba0-8715-4391-82c8-f05f3d9ca902] Pending
helpers_test.go:344: "busybox-mount" [41785ba0-8715-4391-82c8-f05f3d9ca902] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [41785ba0-8715-4391-82c8-f05f3d9ca902] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [41785ba0-8715-4391-82c8-f05f3d9ca902] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003994917s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-313000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-313000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3785911569/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.48s)

TestFunctional/parallel/MountCmd/specific-port (1.18s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-313000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1306183488/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-313000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1306183488/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-313000 ssh "sudo umount -f /mount-9p": exit status 1 (65.624458ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-313000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-313000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1306183488/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.18s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.07s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-313000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1517033498/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-313000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1517033498/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-313000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1517033498/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-313000 ssh "findmnt -T" /mount1: exit status 1 (82.5615ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0924 11:37:59.294278    1598 retry.go:31] will retry after 475.11576ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-313000 ssh "findmnt -T" /mount3: exit status 1 (62.706583ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0924 11:37:59.969497    1598 retry.go:31] will retry after 1.060892572s: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-313000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-313000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1517033498/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-313000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1517033498/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-313000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1517033498/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.07s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-313000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-313000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-tnb87" [ce3378d0-66e5-4ae6-94cf-7af0fae9dd9e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-tnb87" [ce3378d0-66e5-4ae6-94cf-7af0fae9dd9e] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.012646417s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.10s)

TestFunctional/parallel/ServiceCmd/List (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 service list -o json
functional_test.go:1494: Took "302.144792ms" to run "out/minikube-darwin-arm64 -p functional-313000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:32694
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-313000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:32694
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "100.062291ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.533084ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "99.363167ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "35.43925ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-313000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-313000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-313000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (181.32s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-978000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0924 11:38:53.125446    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:40:15.048733    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-978000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m1.130725542s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (181.32s)

TestMultiControlPlane/serial/DeployApp (5.42s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-978000 -- rollout status deployment/busybox: (3.933066834s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec busybox-7dff88458-6fcdw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec busybox-7dff88458-jh9f2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec busybox-7dff88458-z7mc8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec busybox-7dff88458-6fcdw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec busybox-7dff88458-jh9f2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec busybox-7dff88458-z7mc8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec busybox-7dff88458-6fcdw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec busybox-7dff88458-jh9f2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec busybox-7dff88458-z7mc8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.42s)

TestMultiControlPlane/serial/PingHostFromPods (0.76s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec busybox-7dff88458-6fcdw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec busybox-7dff88458-6fcdw -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec busybox-7dff88458-jh9f2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec busybox-7dff88458-jh9f2 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec busybox-7dff88458-z7mc8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-978000 -- exec busybox-7dff88458-z7mc8 -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.76s)

TestMultiControlPlane/serial/AddWorkerNode (53.7s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-978000 -v=7 --alsologtostderr
E0924 11:42:31.162523    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/addons-472000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:42:32.047387    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:42:32.055008    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:42:32.067233    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:42:32.090619    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:42:32.133945    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:42:32.217308    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:42:32.380693    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:42:32.704049    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:42:33.347029    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:42:34.630556    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
E0924 11:42:37.194061    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-978000 -v=7 --alsologtostderr: (53.490190959s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.70s)

TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-978000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.31s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.31s)

TestMultiControlPlane/serial/CopyFile (4.1s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp testdata/cp-test.txt ha-978000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3137899585/001/cp-test_ha-978000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000:/home/docker/cp-test.txt ha-978000-m02:/home/docker/cp-test_ha-978000_ha-978000-m02.txt
E0924 11:42:42.315657    1598 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19700-1081/.minikube/profiles/functional-313000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m02 "sudo cat /home/docker/cp-test_ha-978000_ha-978000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000:/home/docker/cp-test.txt ha-978000-m03:/home/docker/cp-test_ha-978000_ha-978000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m03 "sudo cat /home/docker/cp-test_ha-978000_ha-978000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000:/home/docker/cp-test.txt ha-978000-m04:/home/docker/cp-test_ha-978000_ha-978000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m04 "sudo cat /home/docker/cp-test_ha-978000_ha-978000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp testdata/cp-test.txt ha-978000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3137899585/001/cp-test_ha-978000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000-m02:/home/docker/cp-test.txt ha-978000:/home/docker/cp-test_ha-978000-m02_ha-978000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000 "sudo cat /home/docker/cp-test_ha-978000-m02_ha-978000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000-m02:/home/docker/cp-test.txt ha-978000-m03:/home/docker/cp-test_ha-978000-m02_ha-978000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m03 "sudo cat /home/docker/cp-test_ha-978000-m02_ha-978000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000-m02:/home/docker/cp-test.txt ha-978000-m04:/home/docker/cp-test_ha-978000-m02_ha-978000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m04 "sudo cat /home/docker/cp-test_ha-978000-m02_ha-978000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp testdata/cp-test.txt ha-978000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3137899585/001/cp-test_ha-978000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000-m03:/home/docker/cp-test.txt ha-978000:/home/docker/cp-test_ha-978000-m03_ha-978000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000 "sudo cat /home/docker/cp-test_ha-978000-m03_ha-978000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000-m03:/home/docker/cp-test.txt ha-978000-m02:/home/docker/cp-test_ha-978000-m03_ha-978000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m02 "sudo cat /home/docker/cp-test_ha-978000-m03_ha-978000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000-m03:/home/docker/cp-test.txt ha-978000-m04:/home/docker/cp-test_ha-978000-m03_ha-978000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m04 "sudo cat /home/docker/cp-test_ha-978000-m03_ha-978000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp testdata/cp-test.txt ha-978000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile3137899585/001/cp-test_ha-978000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000-m04:/home/docker/cp-test.txt ha-978000:/home/docker/cp-test_ha-978000-m04_ha-978000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000 "sudo cat /home/docker/cp-test_ha-978000-m04_ha-978000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000-m04:/home/docker/cp-test.txt ha-978000-m02:/home/docker/cp-test_ha-978000-m04_ha-978000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m02 "sudo cat /home/docker/cp-test_ha-978000-m04_ha-978000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 cp ha-978000-m04:/home/docker/cp-test.txt ha-978000-m03:/home/docker/cp-test_ha-978000-m04_ha-978000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-978000 ssh -n ha-978000-m03 "sudo cat /home/docker/cp-test_ha-978000-m04_ha-978000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.10s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (75.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m15.053744083s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (75.05s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.98s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-650000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-650000 --output=json --user=testUser: (2.97846025s)
--- PASS: TestJSONOutput/stop/Command (2.98s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-653000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-653000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.837542ms)
-- stdout --
	{"specversion":"1.0","id":"8da88f16-ce23-48e8-8f69-406a15d2cb64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-653000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5fa90689-bc49-429e-83ab-29c016acd288","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19700"}}
	{"specversion":"1.0","id":"53dc14ee-576a-41b4-aa1a-a7b87c3322dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig"}}
	{"specversion":"1.0","id":"0e84dcd8-0dc0-47d5-8249-9f779ae0d149","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e0222670-4845-46c7-bbe1-93b6c8b158aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7852890d-f833-49f6-b2c3-06fba886aca9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube"}}
	{"specversion":"1.0","id":"66306713-f1ed-4d43-b913-62b24cd78beb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0e017920-cb72-445f-bfb5-1c9d7ba476b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-653000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-653000
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.52s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.52s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-339000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.834375ms)
-- stdout --
	* [NoKubernetes-339000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19700
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19700-1081/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19700-1081/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-339000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-339000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.637167ms)
-- stdout --
	* The control-plane node NoKubernetes-339000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-339000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.31s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.631977375s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.677454542s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.31s)

TestNoKubernetes/serial/Stop (3.13s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-339000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-339000: (3.133246875s)
--- PASS: TestNoKubernetes/serial/Stop (3.13s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-339000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-339000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.613917ms)
-- stdout --
	* The control-plane node NoKubernetes-339000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-339000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-164000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

TestStartStop/group/old-k8s-version/serial/Stop (3.44s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-857000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-857000 --alsologtostderr -v=3: (3.435041s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-857000 -n old-k8s-version-857000: exit status 7 (55.940833ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-857000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (2.06s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-118000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-118000 --alsologtostderr -v=3: (2.064383666s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-118000 -n no-preload-118000: exit status 7 (55.759541ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-118000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.39s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-768000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-768000 --alsologtostderr -v=3: (3.38864275s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.39s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-768000 -n embed-certs-768000: exit status 7 (60.71825ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-768000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-916000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-916000 --alsologtostderr -v=3: (2.125659417s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-916000 -n default-k8s-diff-port-916000: exit status 7 (59.639708ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-916000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.05s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-773000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.05s)

TestStartStop/group/newest-cni/serial/Stop (3.25s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-773000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-773000 --alsologtostderr -v=3: (3.248284583s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.25s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-773000 -n newest-cni-773000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-773000 -n newest-cni-773000: exit status 7 (63.381166ms)
-- stdout --
	Stopped

start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-773000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (20/273)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.34s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-138000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-138000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-138000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-138000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-138000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-138000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-138000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-138000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-138000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-138000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-138000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: /etc/hosts:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: /etc/resolv.conf:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-138000

>>> host: crictl pods:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: crictl containers:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> k8s: describe netcat deployment:
error: context "cilium-138000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-138000" does not exist

>>> k8s: netcat logs:
error: context "cilium-138000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-138000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-138000" does not exist

>>> k8s: coredns logs:
error: context "cilium-138000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-138000" does not exist

>>> k8s: api server logs:
error: context "cilium-138000" does not exist

>>> host: /etc/cni:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: ip a s:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: ip r s:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: iptables-save:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: iptables table nat:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-138000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-138000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-138000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-138000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-138000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-138000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-138000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-138000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-138000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-138000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-138000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: kubelet daemon config:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> k8s: kubelet logs:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-138000

>>> host: docker daemon status:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: docker daemon config:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: docker system info:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: cri-docker daemon status:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: cri-docker daemon config:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: cri-dockerd version:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: containerd daemon status:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: containerd daemon config:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: containerd config dump:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: crio daemon status:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: crio daemon config:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: /etc/crio:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

>>> host: crio config:
* Profile "cilium-138000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138000"

----------------------- debugLogs end: cilium-138000 [took: 2.225748208s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-138000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-138000
--- SKIP: TestNetworkPlugins/group/cilium (2.34s)

TestStartStop/group/disable-driver-mounts (0.11s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-808000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-808000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)